Nov 24 08:53:42 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 08:53:42 crc restorecon[4642]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 08:53:42 crc restorecon[4642]: 
/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:42 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 08:53:43 crc restorecon[4642]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 08:53:44 crc kubenswrapper[4719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 08:53:44 crc kubenswrapper[4719]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 08:53:44 crc kubenswrapper[4719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 08:53:44 crc kubenswrapper[4719]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 08:53:44 crc kubenswrapper[4719]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 08:53:44 crc kubenswrapper[4719]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.291295 4719 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.296990 4719 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297011 4719 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297015 4719 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297019 4719 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297022 4719 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297026 4719 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297032 4719 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297066 4719 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297074 4719 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297080 4719 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297086 4719 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297091 4719 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297096 4719 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297101 4719 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297104 4719 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297109 4719 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297113 4719 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297116 4719 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297120 4719 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297124 4719 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297129 4719 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297133 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297136 4719 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297140 4719 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297143 4719 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297147 4719 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297151 4719 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297154 4719 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297158 4719 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297161 4719 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297165 4719 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297168 4719 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297173 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297176 4719 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297179 4719 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297183 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297186 4719 feature_gate.go:330] unrecognized feature gate: Example Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297191 4719 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297195 4719 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297200 4719 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297203 4719 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297208 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297213 4719 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297216 4719 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297220 4719 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297223 4719 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297227 4719 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297230 4719 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297234 4719 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297237 4719 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297240 4719 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297244 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297247 4719 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297251 4719 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297254 4719 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297257 4719 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297261 4719 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297264 4719 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297268 4719 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297271 4719 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297275 4719 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297278 4719 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297283 4719 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
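
[Annotation] The long runs of "unrecognized feature gate" warnings come from handing the full OpenShift gate set to the upstream kubelet, which only accepts names registered with its own gate registry; the GA/deprecated lines (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, KMSv1, DisableKubeletCloudCredentialProviders) are the ones it does know. A small sketch of that mechanism using k8s.io/component-base/featuregate: the strict API returns an error for unknown names, which the kubelet build in this log evidently downgrades to the W... warning. Gate names are from the log; the FeatureSpec is illustrative.

    // Sketch: why unregistered gate names are rejected (or warned about).
    package main

    import (
    	"fmt"

    	"k8s.io/component-base/featuregate"
    )

    func main() {
    	gates := featuregate.NewFeatureGate()
    	// Register a gate this binary knows about, as the kubelet does at init.
    	_ = gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
    		"CloudDualStackNodeIPs": {Default: true, PreRelease: featuregate.GA},
    	})
    	// A registered gate is accepted silently.
    	if err := gates.SetFromMap(map[string]bool{"CloudDualStackNodeIPs": true}); err != nil {
    		fmt.Println(err)
    	}
    	// An OpenShift-carried gate is unknown to the upstream registry; the
    	// strict API errors, the kubelet in this log merely logs a warning.
    	if err := gates.SetFromMap(map[string]bool{"MachineConfigNodes": true}); err != nil {
    		fmt.Println("strict behavior:", err) // "unrecognized feature gate: MachineConfigNodes"
    	}
    }
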
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297287 4719 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297291 4719 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297294 4719 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297298 4719 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297302 4719 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297306 4719 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297309 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.297313 4719 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298057 4719 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298073 4719 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298082 4719 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298088 4719 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298095 4719 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298099 4719 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298105 4719 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298111 4719 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298115 4719 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298120 4719 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298127 4719 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298133 4719 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298138 4719 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298143 4719 flags.go:64] FLAG: --cgroup-root="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298149 4719 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298154 4719 flags.go:64] FLAG: --client-ca-file="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298159 4719 flags.go:64] FLAG: --cloud-config="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298164 4719 flags.go:64] FLAG: --cloud-provider="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298169 4719 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298177 4719 flags.go:64] FLAG: --cluster-domain="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298182 4719 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298187 4719 flags.go:64] FLAG: --config-dir="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298192 4719 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298198 4719 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298205 4719 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298210 4719 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298215 4719 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298219 4719 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298224 4719 flags.go:64] FLAG: --contention-profiling="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298227 4719 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298232 4719 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298236 4719 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298241 4719 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298247 4719 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298251 4719 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298255 4719 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298259 4719 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298263 4719 flags.go:64] FLAG: --enable-server="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298267 4719 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298629 4719 flags.go:64] FLAG: --event-burst="100" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298634 4719 flags.go:64] FLAG: --event-qps="50" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298639 4719 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298646 4719 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298651 4719 flags.go:64] FLAG: --eviction-hard="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298658 4719 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298662 4719 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298666 4719 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298671 4719 flags.go:64] FLAG: --eviction-soft="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298675 4719 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298679 4719 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 
08:53:44.298683 4719 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298687 4719 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298691 4719 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298695 4719 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298699 4719 flags.go:64] FLAG: --feature-gates="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298705 4719 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298710 4719 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298715 4719 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298720 4719 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298725 4719 flags.go:64] FLAG: --healthz-port="10248" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298729 4719 flags.go:64] FLAG: --help="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298733 4719 flags.go:64] FLAG: --hostname-override="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298738 4719 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298743 4719 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298748 4719 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298752 4719 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298756 4719 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298760 4719 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298764 4719 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298769 4719 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298773 4719 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298777 4719 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298781 4719 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298785 4719 flags.go:64] FLAG: --kube-reserved="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298791 4719 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298795 4719 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298799 4719 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298803 4719 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298807 4719 flags.go:64] FLAG: --lock-file="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298811 4719 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298816 4719 flags.go:64] 
FLAG: --log-flush-frequency="5s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298821 4719 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298829 4719 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298834 4719 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298840 4719 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298845 4719 flags.go:64] FLAG: --logging-format="text" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298850 4719 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298856 4719 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298861 4719 flags.go:64] FLAG: --manifest-url="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298866 4719 flags.go:64] FLAG: --manifest-url-header="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298873 4719 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298879 4719 flags.go:64] FLAG: --max-open-files="1000000" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298886 4719 flags.go:64] FLAG: --max-pods="110" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298891 4719 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298896 4719 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298902 4719 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298907 4719 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298912 4719 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298918 4719 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298923 4719 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298938 4719 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298943 4719 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298949 4719 flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298954 4719 flags.go:64] FLAG: --pod-cidr="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298960 4719 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298969 4719 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298975 4719 flags.go:64] FLAG: --pod-max-pids="-1" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298982 4719 flags.go:64] FLAG: --pods-per-core="0" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298987 4719 flags.go:64] FLAG: --port="10250" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 
08:53:44.298991 4719 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.298996 4719 flags.go:64] FLAG: --provider-id="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299001 4719 flags.go:64] FLAG: --qos-reserved="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299007 4719 flags.go:64] FLAG: --read-only-port="10255" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299012 4719 flags.go:64] FLAG: --register-node="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299017 4719 flags.go:64] FLAG: --register-schedulable="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299022 4719 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299033 4719 flags.go:64] FLAG: --registry-burst="10" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299058 4719 flags.go:64] FLAG: --registry-qps="5" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299063 4719 flags.go:64] FLAG: --reserved-cpus="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299068 4719 flags.go:64] FLAG: --reserved-memory="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299075 4719 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299080 4719 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299085 4719 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299091 4719 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299096 4719 flags.go:64] FLAG: --runonce="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299101 4719 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299106 4719 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299112 4719 flags.go:64] FLAG: --seccomp-default="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299117 4719 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299122 4719 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299127 4719 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299132 4719 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299137 4719 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299142 4719 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299147 4719 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299152 4719 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299157 4719 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299162 4719 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299167 4719 flags.go:64] FLAG: --system-cgroups="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299173 4719 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299183 4719 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299189 4719 flags.go:64] FLAG: --tls-cert-file="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299194 4719 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299202 4719 flags.go:64] FLAG: --tls-min-version="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299206 4719 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299211 4719 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299216 4719 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299221 4719 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299226 4719 flags.go:64] FLAG: --v="2" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299233 4719 flags.go:64] FLAG: --version="false" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299240 4719 flags.go:64] FLAG: --vmodule="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299247 4719 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299252 4719 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299401 4719 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299408 4719 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299412 4719 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299416 4719 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299420 4719 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299424 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299428 4719 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299431 4719 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299435 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299439 4719 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299443 4719 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299447 4719 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299451 4719 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299455 4719 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299459 4719 feature_gate.go:330] 
unrecognized feature gate: AWSClusterHostedDNS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299462 4719 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299466 4719 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299470 4719 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299473 4719 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299477 4719 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299480 4719 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299484 4719 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299488 4719 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299492 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299495 4719 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299499 4719 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299502 4719 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299506 4719 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299509 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299513 4719 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299516 4719 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299521 4719 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299524 4719 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299528 4719 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299533 4719 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
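
[Annotation] The FLAG: dump above is only the command-line half of the kubelet's configuration; it is merged with /etc/kubernetes/kubelet.conf at startup. One way to inspect the merged result on a live node is the configz endpoint proxied through the API server, equivalent to `kubectl get --raw /api/v1/nodes/crc/proxy/configz`. A sketch with client-go, assuming the API server is reachable again and a kubeconfig with node-proxy rights (the path here reuses the one from the log; the node name crc is from the hostname field):

    // Sketch: fetching the kubelet's effective (merged) configuration.
    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// GET /api/v1/nodes/crc/proxy/configz via the API server.
    	raw, err := cs.CoreV1().RESTClient().Get().
    		Resource("nodes").Name("crc").SubResource("proxy").Suffix("configz").
    		DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(raw)) // JSON KubeletConfiguration as actually in effect
    }
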
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299537 4719 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299541 4719 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299546 4719 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299551 4719 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299556 4719 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299562 4719 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299567 4719 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299572 4719 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299577 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299581 4719 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299585 4719 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299590 4719 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299594 4719 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299599 4719 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299603 4719 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299607 4719 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299611 4719 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299616 4719 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299619 4719 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299623 4719 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299626 4719 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299630 4719 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299633 4719 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299636 4719 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299646 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299649 4719 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299653 4719 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299656 4719 feature_gate.go:330] unrecognized feature gate: Example Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299661 4719 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299665 4719 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299669 4719 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299673 4719 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299677 4719 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299681 4719 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299684 4719 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.299688 4719 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.299701 4719 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.309080 4719 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.309118 4719 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309194 4719 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstall Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309203 4719 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309208 4719 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309214 4719 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309219 4719 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309223 4719 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309228 4719 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309233 4719 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309239 4719 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309245 4719 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309249 4719 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309254 4719 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309259 4719 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309264 4719 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309269 4719 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309273 4719 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309279 4719 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309285 4719 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309290 4719 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309295 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309299 4719 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309304 4719 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309308 4719 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309312 4719 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309317 4719 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309321 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309325 4719 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309330 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309334 4719 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309338 4719 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309343 4719 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309347 4719 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309351 4719 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309356 4719 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309361 4719 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309366 4719 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309370 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309375 4719 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309379 4719 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309385 4719 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309392 4719 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309398 4719 feature_gate.go:330] unrecognized feature gate: Example Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309403 4719 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309409 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309413 4719 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309418 4719 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309422 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309427 4719 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309431 4719 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309438 4719 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309443 4719 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309447 4719 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309452 4719 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309456 4719 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309461 4719 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309467 4719 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309473 4719 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309478 4719 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309482 4719 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309487 4719 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309492 4719 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309497 4719 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309501 4719 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309506 4719 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309511 4719 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309515 4719 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309520 4719 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309524 4719 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309528 4719 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309533 4719 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309538 4719 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.309546 4719 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309686 4719 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309696 4719 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309701 4719 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309706 4719 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309711 4719 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309717 4719 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309724 4719 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309730 4719 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309734 4719 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309741 4719 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309748 4719 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309754 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309759 4719 feature_gate.go:330] unrecognized feature gate: Example Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309764 4719 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309769 4719 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309773 4719 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309778 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309783 4719 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309787 4719 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309791 4719 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309796 4719 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309800 4719 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309804 4719 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309809 4719 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309814 4719 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309821 4719 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309826 4719 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309831 4719 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309836 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309840 4719 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309845 4719 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309849 4719 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309854 4719 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309858 4719 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309863 4719 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309868 4719 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309872 4719 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309877 4719 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309881 4719 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309885 4719 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309889 4719 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309896 4719 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309902 4719 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309907 4719 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309912 4719 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309916 4719 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309921 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309925 4719 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309929 4719 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309933 4719 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309938 4719 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309942 4719 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309946 4719 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309951 4719 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309955 4719 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309959 4719 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309964 4719 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309968 4719 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309973 4719 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309978 4719 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309982 4719 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309987 4719 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309992 4719 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.309996 4719 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.310000 4719 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.310004 4719 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.310009 4719 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.310013 4719 
feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.310017 4719 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.310022 4719 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.310027 4719 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.310038 4719 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.311827 4719 server.go:940] "Client rotation is on, will bootstrap in background" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.316419 4719 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.316512 4719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.319311 4719 server.go:997] "Starting client certificate rotation" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.319343 4719 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.321024 4719 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-21 15:35:16.722510864 +0000 UTC Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.321158 4719 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.355905 4719 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.358923 4719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.359946 4719 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.374475 4719 log.go:25] "Validated CRI v1 runtime API" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.410502 4719 log.go:25] "Validated CRI v1 image API" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.412351 4719 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.418306 4719 fs.go:133] Filesystem UUIDs: 
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.418306 4719 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-08-47-22-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.418353 4719 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.438640 4719 manager.go:217] Machine: {Timestamp:2025-11-24 08:53:44.427221586 +0000 UTC m=+0.758494868 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f09286b9-10a4-4ae2-b7f4-49183b71cd1c BootID:9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:73:f0:14 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:73:f0:14 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1c:7d:1e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:60:d4:5e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2b:59:96 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:25:37:85 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:4b:5a:b6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:f7:81:a2:48:c7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e2:fa:5a:48:09:06 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.438911 4719 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.439153 4719 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.439498 4719 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.439680 4719 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
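The manager.go:217 record above packs the entire cAdvisor machine inventory into one entry; the fields most often needed for sizing questions are MemoryCapacity and the per-filesystem Capacity values. A hedged sketch that pulls them out of a captured journal, assuming only the field layout visible here (kubelet.log is again a placeholder):

import re

with open("kubelet.log") as f:  # placeholder: a captured copy of this journal
    line = next(l for l in f if "manager.go:217] Machine:" in l)

mem = int(re.search(r"MemoryCapacity:(\d+)", line).group(1))
print(f"memory: {mem / 2**30:.1f} GiB")
for dev, cap in re.findall(r"\{Device:(\S+) DeviceMajor:\d+ DeviceMinor:\d+ Capacity:(\d+)", line):
    print(f"{dev}: {int(cap) / 2**30:.1f} GiB")

For this machine that works out to about 23.5 GiB of memory, with /dev/vda4 (mounted at /var) at about 79.4 GiB.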
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.439727 4719 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.439931 4719 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.439941 4719 container_manager_linux.go:303] "Creating device plugin manager"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.440478 4719 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.440509 4719 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.440754 4719 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.440845 4719 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.447891 4719 kubelet.go:418] "Attempting to sync node with API server"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.448002 4719 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.448061 4719 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.448079 4719 kubelet.go:324] "Adding apiserver pod source"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.448098 4719 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.453594 4719 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.453866 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.453867 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.453957 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.453957 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.455295 4719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.457253 4719 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.459874 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.459989 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460137 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460216 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460305 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460368 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460420 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460476 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460528 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460578 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460650 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.460707 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.463798 4719 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.464427 4719 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.464446 4719 server.go:1280] "Started kubelet"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.464573 4719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.465055 4719 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.465336 4719 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.466136 4719 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.466170 4719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.466215 4719 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:48:45.391766181 +0000 UTC
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.466260 4719 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 169h55m0.925509528s for next certificate rotation
Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.466321 4719 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.466325 4719 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.466336 4719 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.466342 4719 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 24 08:53:44 crc systemd[1]: Started Kubernetes Kubelet.
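The kubelet is now up, but with the apiserver still unreachable the next phase is offline reconciliation: the node-lease and event writes below fail with the same connection refused, and the long run of reconstruct.go:130 messages is the volume manager rebuilding its actual state from disk, marking every mount it finds under /var/lib/kubelet/pods as uncertain until the API can confirm it. A sketch of the same on-disk walk, assuming only the standard kubelet layout shown in those paths:

from pathlib import Path

root = Path("/var/lib/kubelet/pods")  # kubelet root directory from this log
if root.exists():
    for pod_dir in sorted(root.iterdir()):
        vol_root = pod_dir / "volumes"
        if not vol_root.is_dir():
            continue
        for plugin_dir in vol_root.iterdir():  # e.g. kubernetes.io~configmap
            for vol in plugin_dir.iterdir():
                print(pod_dir.name, plugin_dir.name, vol.name)

Each pod UID directory holds volumes/<plugin>/<volume>, matching the kubernetes.io/configmap, kubernetes.io/secret, and kubernetes.io/projected volumeName values in the reconstruct messages that follow.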
Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.466902 4719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.26:6443: connect: connection refused" interval="200ms" Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.466919 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.466970 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.467168 4719 factory.go:55] Registering systemd factory Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.467184 4719 factory.go:221] Registration of the systemd container factory successfully Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.468286 4719 factory.go:153] Registering CRI-O factory Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.468383 4719 factory.go:221] Registration of the crio container factory successfully Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.468536 4719 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.468624 4719 factory.go:103] Registering Raw factory Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.468712 4719 manager.go:1196] Started watching for new ooms in manager Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.469167 4719 server.go:460] "Adding debug handlers to kubelet server" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.469549 4719 manager.go:319] Starting recovery of all containers Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.473923 4719 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.26:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187ae564a3134433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 08:53:44.464409651 +0000 UTC m=+0.795682923,LastTimestamp:2025-11-24 08:53:44.464409651 +0000 UTC m=+0.795682923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.487973 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 08:53:44 crc 
kubenswrapper[4719]: I1124 08:53:44.488089 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488107 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488453 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488486 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488502 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488514 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488526 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488545 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488560 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488601 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488659 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488700 4719 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488744 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488762 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488775 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488788 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488801 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488813 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488828 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488842 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488853 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488867 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488880 4719 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488895 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488910 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488931 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488947 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.488964 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492099 4719 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492167 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492193 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492206 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492222 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 
08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492248 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492261 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492277 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492292 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492307 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492322 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492336 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492349 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492363 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492377 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492393 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: 
I1124 08:53:44.492406 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492419 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492432 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492448 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492463 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492476 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492490 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492502 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492519 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492535 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492549 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492562 4719 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492575 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492588 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492601 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492615 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492627 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492638 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492651 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492667 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492678 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492690 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492702 4719 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492716 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492763 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492779 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492793 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492807 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492820 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492834 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492847 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492861 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492874 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492887 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492902 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492915 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492930 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492945 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492958 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492971 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492984 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.492997 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493010 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493023 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493041 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493071 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493083 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493096 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493109 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493122 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493135 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493148 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493160 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493172 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493199 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493212 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493226 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493239 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493251 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493262 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493280 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493293 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493306 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493319 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493333 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493351 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493384 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493399 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493411 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493423 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493437 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493449 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493478 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493491 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493504 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493515 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493527 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493538 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493550 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493563 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493706 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493729 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493741 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493755 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493767 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493777 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493791 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493802 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493816 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493829 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493842 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493856 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493869 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493881 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493895 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493908 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493923 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493938 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493952 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.493964 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" 
seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494073 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494089 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494101 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494115 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494127 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494140 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494152 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494164 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494178 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494190 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494203 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc 
kubenswrapper[4719]: I1124 08:53:44.494219 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494232 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494244 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494257 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494269 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494281 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494293 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494304 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494317 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494332 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494345 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 08:53:44 crc 
kubenswrapper[4719]: I1124 08:53:44.494357 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494371 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494383 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494395 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494407 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494424 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494438 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494452 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494465 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494477 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494490 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: 
I1124 08:53:44.494502 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494516 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494530 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494607 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494626 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494640 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494654 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494667 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494679 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494692 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494706 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494719 4719 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494730 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494740 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494751 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494760 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494769 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494780 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494790 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494800 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494812 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494824 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494837 4719 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494851 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494863 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494878 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494892 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494906 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494918 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494929 4719 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494946 4719 reconstruct.go:97] "Volume reconstruction finished" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.494955 4719 reconciler.go:26] "Reconciler: start to sync state" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.504194 4719 manager.go:324] Recovery completed Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.513739 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.515137 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.515180 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.515191 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.516642 4719 
cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.516660 4719 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.516687 4719 state_mem.go:36] "Initialized new in-memory state store" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.517145 4719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.518669 4719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.519518 4719 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.519553 4719 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.519681 4719 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 08:53:44 crc kubenswrapper[4719]: W1124 08:53:44.521500 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.521563 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.551154 4719 policy_none.go:49] "None policy: Start" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.552443 4719 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.552477 4719 state_mem.go:35] "Initializing new in-memory state store" Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.567378 4719 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.606509 4719 manager.go:334] "Starting Device Plugin manager" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.606751 4719 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.606770 4719 server.go:79] "Starting device plugin registration server" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.607222 4719 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.607240 4719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.608725 4719 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.608908 4719 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.608916 4719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 08:53:44 crc 
kubenswrapper[4719]: E1124 08:53:44.615407 4719 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.619809 4719 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.619901 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.620918 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.620967 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.620981 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.621236 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.621356 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.621408 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622286 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622317 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622334 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622365 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622382 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622392 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622522 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622557 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.622596 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623271 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623302 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623314 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623470 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623568 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623589 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623597 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623752 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.623775 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.624511 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.624589 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.624610 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.624904 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.625747 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.625789 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.626465 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.627600 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.627646 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.627607 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.627827 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.627903 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.627742 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.628178 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.628194 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.628336 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.628445 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.630231 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.630256 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.630266 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.667906 4719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.26:6443: connect: connection refused" interval="400ms" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696329 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696380 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696411 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696436 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696503 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696581 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696644 4719 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696671 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696723 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696748 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696794 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696815 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696834 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696886 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.696909 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.707543 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: 
I1124 08:53:44.708548 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.708585 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.708594 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.708620 4719 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.709099 4719 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.26:6443: connect: connection refused" node="crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798692 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798774 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798798 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798814 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798831 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798845 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798862 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798854 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798894 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798931 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798932 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798909 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798878 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.798975 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799021 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799062 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799068 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799088 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799097 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799122 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799141 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799159 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799176 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799197 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799205 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799235 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799541 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799566 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799583 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.799607 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.910029 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.911481 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.911528 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.911540 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.911563 4719 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 08:53:44 crc kubenswrapper[4719]: E1124 08:53:44.911930 4719 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.26:6443: connect: connection refused" node="crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.971660 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.978178 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 08:53:44 crc kubenswrapper[4719]: I1124 08:53:44.999439 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.022616 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-411cf81c56c3e8ce0fde4ffe292edd83d5c3caee722dc57f8ecbc6e97288289f WatchSource:0}: Error finding container 411cf81c56c3e8ce0fde4ffe292edd83d5c3caee722dc57f8ecbc6e97288289f: Status 404 returned error can't find the container with id 411cf81c56c3e8ce0fde4ffe292edd83d5c3caee722dc57f8ecbc6e97288289f Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.026177 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6c6e0f9c61d0e8797a8ed78781a4d9cebbc68284ce25d8acd0e4753f75050b9a WatchSource:0}: Error finding container 6c6e0f9c61d0e8797a8ed78781a4d9cebbc68284ce25d8acd0e4753f75050b9a: Status 404 returned error can't find the container with id 6c6e0f9c61d0e8797a8ed78781a4d9cebbc68284ce25d8acd0e4753f75050b9a Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.026994 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-cae5370d1fc0a3713f7202ff228d50399fd026f5ce684c0cdfe40f6597c10799 WatchSource:0}: Error finding container cae5370d1fc0a3713f7202ff228d50399fd026f5ce684c0cdfe40f6597c10799: Status 404 returned error can't find the container with id cae5370d1fc0a3713f7202ff228d50399fd026f5ce684c0cdfe40f6597c10799 Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.032380 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.042494 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.059850 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6c421e0838833ab89416397b805df95d0d04b8b51a20a9abd949cffd5389ed93 WatchSource:0}: Error finding container 6c421e0838833ab89416397b805df95d0d04b8b51a20a9abd949cffd5389ed93: Status 404 returned error can't find the container with id 6c421e0838833ab89416397b805df95d0d04b8b51a20a9abd949cffd5389ed93 Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.065025 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2dedb84a0875ca97a6f8713b33f1be192f2b2ab550af65a3e4cd853021a85c71 WatchSource:0}: Error finding container 2dedb84a0875ca97a6f8713b33f1be192f2b2ab550af65a3e4cd853021a85c71: Status 404 returned error can't find the container with id 2dedb84a0875ca97a6f8713b33f1be192f2b2ab550af65a3e4cd853021a85c71 Nov 24 08:53:45 crc kubenswrapper[4719]: E1124 08:53:45.068712 4719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.26:6443: connect: connection refused" interval="800ms" Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.313077 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.314711 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.314745 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.314755 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.314775 4719 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 08:53:45 crc kubenswrapper[4719]: E1124 08:53:45.315226 4719 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.26:6443: connect: connection refused" node="crc" Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.344222 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused Nov 24 08:53:45 crc kubenswrapper[4719]: E1124 08:53:45.344323 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError" Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.387811 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused Nov 24 08:53:45 crc kubenswrapper[4719]: E1124 08:53:45.387913 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError" Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.465298 4719 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.484181 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused Nov 24 08:53:45 crc kubenswrapper[4719]: E1124 08:53:45.484286 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError" Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.523905 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"411cf81c56c3e8ce0fde4ffe292edd83d5c3caee722dc57f8ecbc6e97288289f"} Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.524876 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2dedb84a0875ca97a6f8713b33f1be192f2b2ab550af65a3e4cd853021a85c71"} Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.525686 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6c421e0838833ab89416397b805df95d0d04b8b51a20a9abd949cffd5389ed93"} Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.526649 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cae5370d1fc0a3713f7202ff228d50399fd026f5ce684c0cdfe40f6597c10799"} Nov 24 08:53:45 crc kubenswrapper[4719]: I1124 08:53:45.527490 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6c6e0f9c61d0e8797a8ed78781a4d9cebbc68284ce25d8acd0e4753f75050b9a"} Nov 24 08:53:45 crc kubenswrapper[4719]: W1124 08:53:45.709450 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused Nov 24 08:53:45 crc kubenswrapper[4719]: E1124 
Nov 24 08:53:45 crc kubenswrapper[4719]: E1124 08:53:45.709549 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:53:45 crc kubenswrapper[4719]: E1124 08:53:45.870557 4719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.26:6443: connect: connection refused" interval="1.6s"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.115882 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.117321 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.117640 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.117652 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.117673 4719 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 08:53:46 crc kubenswrapper[4719]: E1124 08:53:46.117986 4719 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.26:6443: connect: connection refused" node="crc"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.410913 4719 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 24 08:53:46 crc kubenswrapper[4719]: E1124 08:53:46.412403 4719 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.465384 4719 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.535077 4719 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b" exitCode=0
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.535175 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.535194 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b"}
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.536239 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.536291 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.536304 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.537448 4719 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e" exitCode=0
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.537520 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.537496 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e"}
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.538417 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.538475 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.538488 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.541125 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.541119 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc"}
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.541173 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00"}
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.541194 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c"}
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.541278 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21"}
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.542244 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.542270 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.542280 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.543028 4719 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8" exitCode=0
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.543082 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8"}
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.543127 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.544137 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.544171 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.544184 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.544425 4719 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="81203bd5dbafe3b5e5acab4f4ba0ce46d35265449b86b40a2d2f1ee24d71cf47" exitCode=0
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.544452 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"81203bd5dbafe3b5e5acab4f4ba0ce46d35265449b86b40a2d2f1ee24d71cf47"}
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.544502 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.545227 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.545256 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.545268 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.546608 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.547634 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.547657 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:46 crc kubenswrapper[4719]: I1124 08:53:46.547689 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.465454 4719 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:47 crc kubenswrapper[4719]: E1124 08:53:47.472149 4719 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.26:6443: connect: connection refused" interval="3.2s"
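
The three "Failed to ensure lease exists, will retry" entries above back off from 800ms to 1.6s to 3.2s, i.e. the retry interval doubles after each consecutive failure. A minimal Go sketch of that doubling pattern (the 7s cap is an assumption for illustration; the real node-lease controller logic may differ):

    // backoff.go - the doubling retry interval visible in the lease errors above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 800 * time.Millisecond // first retry interval from the log
        maxInterval := 7 * time.Second     // assumed cap, for illustration only
        for i := 0; i < 4; i++ {
            fmt.Printf("retry in %v\n", interval) // prints 800ms, 1.6s, 3.2s, 6.4s
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
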
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.551317 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"79d36be386d91a3e7c09e9a675d0dff8ba8d3d11de2fec652d90b4c57eb9dd12"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.551374 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.551387 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.551397 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.551408 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.555140 4719 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7881726d77b0251100777df4c0d6f81a91925067d084a59913168dc14874c279" exitCode=0
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.555300 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.555311 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7881726d77b0251100777df4c0d6f81a91925067d084a59913168dc14874c279"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.556506 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.556556 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.556573 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.560961 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.561103 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.562475 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.562506 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.562518 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.564852 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.565392 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.565730 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.565768 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.565782 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e"}
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.566154 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.566179 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.566191 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.566789 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.566814 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.566826 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.718833 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.721342 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.721392 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.721404 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:47 crc kubenswrapper[4719]: I1124 08:53:47.721436 4719 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 08:53:47 crc kubenswrapper[4719]: E1124 08:53:47.721994 4719 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.26:6443: connect: connection refused" node="crc"
Nov 24 08:53:48 crc kubenswrapper[4719]: W1124 08:53:48.019153 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:48 crc kubenswrapper[4719]: E1124 08:53:48.019242 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:53:48 crc kubenswrapper[4719]: E1124 08:53:48.158497 4719 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.26:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187ae564a3134433 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 08:53:44.464409651 +0000 UTC m=+0.795682923,LastTimestamp:2025-11-24 08:53:44.464409651 +0000 UTC m=+0.795682923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 24 08:53:48 crc kubenswrapper[4719]: W1124 08:53:48.345650 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:48 crc kubenswrapper[4719]: E1124 08:53:48.345759 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:53:48 crc kubenswrapper[4719]: W1124 08:53:48.382283 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:48 crc kubenswrapper[4719]: E1124 08:53:48.382376 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.465342 4719 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:48 crc kubenswrapper[4719]: W1124 08:53:48.496243 4719 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.26:6443: connect: connection refused
Nov 24 08:53:48 crc kubenswrapper[4719]: E1124 08:53:48.496325 4719 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.26:6443: connect: connection refused" logger="UnhandledError"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.570200 4719 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ac9c0b24e2ceb8d90e7fe0e2ae69c9da5b1777736919e65e3a6cef8884189786" exitCode=0
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.570294 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ac9c0b24e2ceb8d90e7fe0e2ae69c9da5b1777736919e65e3a6cef8884189786"}
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.570333 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.570362 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.570371 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.571417 4719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.571462 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.571980 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.572009 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.572019 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.572324 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.572364 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.572384 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.575768 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.575818 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.575831 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.576129 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.576177 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:48 crc kubenswrapper[4719]: I1124 08:53:48.576200 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.016637 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.574802 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.576675 4719 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="79d36be386d91a3e7c09e9a675d0dff8ba8d3d11de2fec652d90b4c57eb9dd12" exitCode=255
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.576770 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"79d36be386d91a3e7c09e9a675d0dff8ba8d3d11de2fec652d90b4c57eb9dd12"}
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.576789 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.577741 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.577774 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.577785 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.578277 4719 scope.go:117] "RemoveContainer" containerID="79d36be386d91a3e7c09e9a675d0dff8ba8d3d11de2fec652d90b4c57eb9dd12"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.579398 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.579619 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.580703 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.580747 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.580757 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.584519 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"48d55e9206be5fe5cb0e557096229539729966134fae4410cf72ffc8008b95fd"}
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.584562 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cf3bcdeb2a0044e56d06d19317443a1b069a3b0bb0eab2de6603e641130c7731"}
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.584576 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f888642b6494a1422d2b25965671479cd88bebf84b127de8de8c726572f72c36"}
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.584589 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"972026082ecb29f92af5f30a5297fb1047125336f8145895c126530b3082b4d0"}
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.584602 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"21a0d5ae150d0637979620e27dc63a48e46db3b20b0e750cbcadd5b80defca29"}
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.584615 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.585045 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.585294 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.585322 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.585331 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:49 crc kubenswrapper[4719]: I1124 08:53:49.901548 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.136458 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.398185 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.398395 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.399659 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.399762 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.399785 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.483052 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.600902 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.602659 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761"}
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.602773 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.602793 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.602788 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.602904 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604163 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604198 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604208 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604235 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604234 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604215 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604273 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604286 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.604246 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.656189 4719 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.922502 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.923743 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.923780 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.923791 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:50 crc kubenswrapper[4719]: I1124 08:53:50.923815 4719 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.342445 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.604295 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.604343 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.604303 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.604453 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605560 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605594 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605606 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605610 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605635 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605645 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605560 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605708 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.605719 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:51 crc kubenswrapper[4719]: I1124 08:53:51.875859 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.607068 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.607091 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.608330 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.608391 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.608403 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.609218 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.609248 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.609275 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:52 crc kubenswrapper[4719]: I1124 08:53:52.874871 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:53:53 crc kubenswrapper[4719]: I1124 08:53:53.609477 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:53 crc kubenswrapper[4719]: I1124 08:53:53.610182 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:53 crc kubenswrapper[4719]: I1124 08:53:53.610688 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:53 crc kubenswrapper[4719]: I1124 08:53:53.610727 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:53 crc kubenswrapper[4719]: I1124 08:53:53.610754 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:53 crc kubenswrapper[4719]: I1124 08:53:53.610872 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:53 crc kubenswrapper[4719]: I1124 08:53:53.610901 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:53 crc kubenswrapper[4719]: I1124 08:53:53.610911 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:54 crc kubenswrapper[4719]: I1124 08:53:54.343117 4719 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 24 08:53:54 crc kubenswrapper[4719]: I1124 08:53:54.343200 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 08:53:54 crc kubenswrapper[4719]: E1124 08:53:54.616557 4719 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 24 08:53:56 crc kubenswrapper[4719]: I1124 08:53:56.052412 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 24 08:53:56 crc kubenswrapper[4719]: I1124 08:53:56.053196 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 24 08:53:56 crc kubenswrapper[4719]: I1124 08:53:56.054687 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:53:56 crc kubenswrapper[4719]: I1124 08:53:56.054725 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:53:56 crc kubenswrapper[4719]: I1124 08:53:56.054737 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:53:59 crc kubenswrapper[4719]: I1124 08:53:59.466672 4719 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Nov 24 08:53:59 crc kubenswrapper[4719]: I1124 08:53:59.502095 4719 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 24 08:53:59 crc kubenswrapper[4719]: I1124 08:53:59.502244 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 24 08:53:59 crc kubenswrapper[4719]: I1124 08:53:59.902508 4719 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 24 08:53:59 crc kubenswrapper[4719]: I1124 08:53:59.902818 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.281821 4719 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.281898 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.288979 4719 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.289376 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
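
The 403 above is a different failure mode from the earlier connection-level errors: the kube-apiserver is now serving requests, but the startup probe reaches /livez as system:anonymous, and the system:public-info-viewer / system:openshift:public-info-viewer clusterroles that would allow that have not been created yet at this point in bootstrap, so RBAC answers with a Forbidden Status object. A minimal Go sketch (illustrative only; the 6443 port and InsecureSkipVerify are assumptions, since the log does not show the probe's port) that fetches /livez and prints that Status:

    // livezcheck.go - fetches /livez the way the failing startup probe does.
    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed certs
        }}
        resp, err := client.Get("https://192.168.126.11:6443/livez") // port assumed
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        var status struct {
            Reason  string `json:"reason"`
            Message string `json:"message"`
            Code    int    `json:"code"`
        }
        if json.Unmarshal(body, &status) == nil && status.Code != 0 {
            // During bootstrap this prints the RBAC "Forbidden" Status from the log.
            fmt.Printf("%d %s: %s\n", status.Code, status.Reason, status.Message)
            return
        }
        // A healthy /livez returns plain text ("ok"), not a Status object.
        fmt.Println(resp.StatusCode, string(body))
    }
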
probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.288979 4719 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.289376 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.511604 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.511848 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.513521 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.513558 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.513569 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.527273 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.622982 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.624618 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.624665 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:00 crc kubenswrapper[4719]: I1124 08:54:00.624682 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:01 crc kubenswrapper[4719]: I1124 08:54:01.880443 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:54:01 crc kubenswrapper[4719]: I1124 08:54:01.880721 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:54:01 crc kubenswrapper[4719]: I1124 08:54:01.881794 4719 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 08:54:01 crc kubenswrapper[4719]: I1124 08:54:01.881863 4719 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 08:54:01 crc kubenswrapper[4719]: I1124 08:54:01.881908 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:01 crc kubenswrapper[4719]: I1124 08:54:01.881947 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:01 crc kubenswrapper[4719]: I1124 08:54:01.881981 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:01 crc kubenswrapper[4719]: I1124 08:54:01.885835 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:54:02 crc kubenswrapper[4719]: I1124 08:54:02.629266 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:54:02 crc kubenswrapper[4719]: I1124 08:54:02.629725 4719 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 08:54:02 crc kubenswrapper[4719]: I1124 08:54:02.629795 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 08:54:02 crc kubenswrapper[4719]: I1124 08:54:02.630634 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:02 crc kubenswrapper[4719]: I1124 08:54:02.631337 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:02 crc kubenswrapper[4719]: I1124 08:54:02.631377 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:04 crc kubenswrapper[4719]: I1124 08:54:04.343261 4719 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 08:54:04 crc kubenswrapper[4719]: I1124 08:54:04.343429 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 08:54:04 crc kubenswrapper[4719]: E1124 08:54:04.616927 4719 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.269822 4719 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.288930 4719 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.289120 4719 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.289167 4719 trace.go:236] Trace[2127329508]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 08:53:51.570) (total time: 13719ms): Nov 24 08:54:05 crc kubenswrapper[4719]: Trace[2127329508]: ---"Objects listed" error:<nil> 13718ms (08:54:05.289) Nov 24 08:54:05 crc kubenswrapper[4719]: Trace[2127329508]: [13.719040848s] [13.719040848s] END Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.289183 4719 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.289697 4719 trace.go:236] Trace[868352643]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 08:53:52.805) (total time: 12483ms): Nov 24 08:54:05 crc kubenswrapper[4719]: Trace[868352643]: ---"Objects listed" error:<nil> 12483ms (08:54:05.289) Nov 24 08:54:05 crc kubenswrapper[4719]: Trace[868352643]: [12.483972198s] [12.483972198s] END Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.289712 4719 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.289908 4719 trace.go:236] Trace[1366590294]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 08:53:54.058) (total time: 11231ms): Nov 24 08:54:05 crc kubenswrapper[4719]: Trace[1366590294]: ---"Objects listed" error:<nil> 11231ms (08:54:05.289) Nov 24 08:54:05 crc kubenswrapper[4719]: Trace[1366590294]: [11.231278479s] [11.231278479s] END Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.289924 4719 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.300060 4719 trace.go:236] Trace[481516488]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 08:53:54.087) (total time: 11212ms): Nov 24 08:54:05 crc kubenswrapper[4719]: Trace[481516488]: ---"Objects listed" error:<nil> 11212ms (08:54:05.299) Nov 24 08:54:05 crc kubenswrapper[4719]: Trace[481516488]: [11.212486973s] [11.212486973s] END Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.300099 4719 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.313273 4719 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.321404 4719 csr.go:261] certificate signing request csr-t4cgq is approved, waiting to be issued Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.347751 4719 csr.go:257] certificate signing request csr-t4cgq is issued Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.464837 4719
apiserver.go:52] "Watching apiserver" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.468781 4719 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.469179 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.469603 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.469692 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.469746 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.470009 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.470050 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.470237 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.470276 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.470376 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.470392 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.486236 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.486508 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.492186 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.492927 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.497497 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.497612 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.497707 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.497851 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.497977 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.567772 4719 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591026 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591095 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591121 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591143 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591164 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591185 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591209 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591229 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591253 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591272 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591293 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591328 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591350 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591375 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591396 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591415 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591413 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591457 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591524 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591549 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591570 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591591 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591613 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591632 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591651 4719 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591652 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591668 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591975 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.591670 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592096 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592131 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592156 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592184 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592209 4719 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592216 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592235 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592256 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592263 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592332 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592362 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592405 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592412 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592476 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592485 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592501 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592526 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592549 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592571 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592594 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592617 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592638 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592660 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592688 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592683 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592716 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592742 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592770 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592773 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592815 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592845 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592865 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592899 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592923 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592937 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.592944 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593003 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593031 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593062 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593082 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593110 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593132 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593134 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593158 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593187 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593212 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593236 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593261 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593283 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593305 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593308 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593327 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593350 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593376 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593397 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593420 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593441 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593462 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593465 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593490 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593515 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593543 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593565 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593587 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593608 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593629 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593635 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593654 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593677 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593701 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593723 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593744 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593771 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593796 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593819 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593828 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593844 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593873 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593894 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593916 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593941 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593970 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.593991 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594011 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594053 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594081 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594104 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594127 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594152 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594174 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594197 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594218 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594238 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594261 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594283 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594306 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594334 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594354 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594375 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594399 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594422 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594443 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594465 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594487 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594508 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594530 4719 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594550 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594572 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594593 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594615 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594636 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594657 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594681 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594705 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594727 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 08:54:05 crc 
kubenswrapper[4719]: I1124 08:54:05.594752 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594775 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594798 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594822 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594843 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594867 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594890 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594913 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594935 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594958 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594986 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595008 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595030 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595076 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595099 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595122 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595147 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595171 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595193 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595219 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595245 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595268 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595292 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595316 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595339 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595363 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595385 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595492 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595517 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595542 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595572 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595595 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595620 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595642 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595667 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595690 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595724 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595751 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595779 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595805 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595827 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595853 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595875 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595898 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595922 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595947 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595971 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595997 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596076 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596100 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" 
(UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596120 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596144 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596169 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596192 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596215 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596280 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596382 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596408 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596453 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 
08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596482 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596504 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596526 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596551 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596572 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596595 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596618 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596642 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596665 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596688 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 
08:54:05.596744 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596772 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596802 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596828 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596852 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596881 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596907 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596930 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596960 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596991 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597016 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597593 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597635 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597662 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597743 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597759 4719 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597773 4719 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597785 4719 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597799 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597812 4719 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597824 4719 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597837 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597853 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597864 4719 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597877 4719 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597890 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597904 4719 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597918 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597931 4719 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597944 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597957 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597970 4719 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.598838 4719 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.675996 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594051 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594059 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594125 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594240 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594247 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594306 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594428 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594461 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594674 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594700 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594843 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594911 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.594997 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595117 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595188 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595301 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595363 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595525 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595694 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595754 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595919 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.595939 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596395 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596759 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596848 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.596945 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597210 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597519 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.597782 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.598022 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.598310 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.598641 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.598687 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.598836 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.598952 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.599028 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.599095 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.611395 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.612157 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.612220 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.612342 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.612362 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.612476 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:54:06.112452538 +0000 UTC m=+22.443725790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.678656 4719 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.678734 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:06.17871035 +0000 UTC m=+22.509983602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.678766 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.679045 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.679373 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.679410 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.679634 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.679730 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.679816 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.679955 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.679997 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.612673 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.615107 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.615422 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.615888 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.616475 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.616745 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.619593 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.619882 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.620063 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.620245 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.621165 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.621560 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.621709 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.640846 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.641191 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.643274 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.651224 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.657195 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.657524 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.657729 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.662473 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.662780 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.663151 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.673733 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.673889 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.674455 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.674478 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.674599 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.674804 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.674883 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.675427 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.675676 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.676405 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.677571 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.677600 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.677758 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.677921 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.677987 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.680257 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.680596 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.683126 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.683762 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.684232 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.684367 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.684565 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.684717 4719 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.684779 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:06.184760736 +0000 UTC m=+22.516034058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.690627 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.691488 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.691607 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.691719 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.692253 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.692377 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.692517 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.692733 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.692804 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.692937 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.692982 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.693181 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.693238 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.693409 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.693694 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.693826 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.693990 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.694430 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.694619 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.694934 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.695503 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.696115 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.696227 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.696513 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.696632 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.696805 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.696991 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.697071 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.697320 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.697862 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.698641 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.704701 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.705143 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.705529 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.705663 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.705872 4719 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.705966 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706072 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706131 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706223 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706304 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706381 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706464 4719 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706543 4719 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706621 4719 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706704 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706785 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706863 4719 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706943 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707022 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707145 4719 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707219 4719 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc 
kubenswrapper[4719]: I1124 08:54:05.707304 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707390 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707472 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707542 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707613 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707701 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707780 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707859 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707942 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708020 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708118 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708197 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708277 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 
08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708361 4719 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708458 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708542 4719 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708618 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708694 4719 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708772 4719 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708855 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708936 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709008 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709095 4719 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709171 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709250 4719 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709331 4719 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709404 4719 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709486 4719 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709559 4719 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709633 4719 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709713 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709795 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709875 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.709947 4719 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710025 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710124 4719 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710202 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710283 4719 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710359 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710431 4719 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710502 4719 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710578 4719 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710661 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710739 4719 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710818 4719 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710892 4719 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.710972 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.711112 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.711205 4719 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.711286 4719 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.711358 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.711428 4719 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.711503 4719 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721070 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721162 4719 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721184 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721196 4719 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721207 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721221 4719 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721231 4719 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721242 4719 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721251 4719 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721265 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721275 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721285 4719 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721296 4719 
reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721308 4719 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721318 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721328 4719 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721340 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721350 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721359 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721371 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721383 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721392 4719 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721401 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721410 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721421 4719 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721433 4719 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721499 4719 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721520 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721538 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721553 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721565 4719 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721578 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721590 4719 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721601 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721611 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721622 4719 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721633 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721642 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721653 4719 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721664 4719 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721673 4719 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721682 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721693 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721704 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721713 4719 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721724 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721735 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721744 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721753 4719 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721763 4719 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721774 4719 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721783 4719 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721792 4719 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721802 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721813 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706472 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.706864 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707256 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707648 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.708112 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.713999 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.714369 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.714615 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.714827 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.715054 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.707789 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.717089 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.717109 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.720634 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.721504 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.722297 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.722572 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.722752 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.722928 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.723114 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.723282 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.728509 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.729233 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.729547 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.729805 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.729983 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.612538 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.730082 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.730125 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.730334 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.730407 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.730932 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.731280 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.731483 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.731555 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.731695 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.731651 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.731806 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.731920 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.732001 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.732239 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.732926 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.732983 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.739439 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.741319 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.741338 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.741350 4719 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.741430 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:06.24140398 +0000 UTC m=+22.572677232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.750439 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.751357 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.751391 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.751403 4719 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:05 crc kubenswrapper[4719]: E1124 08:54:05.751462 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:06.251443831 +0000 UTC m=+22.582717083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.751919 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.773239 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.773342 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.773379 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.773402 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.775437 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.776849 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.779285 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.787214 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.788268 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.791164 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.791844 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.793826 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.793815 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.800569 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 08:54:05 crc kubenswrapper[4719]: W1124 08:54:05.809888 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c643b438626cbdfc88a42339d368d8b8204d975390732b54a2d09a183f3c4f62 WatchSource:0}: Error finding container c643b438626cbdfc88a42339d368d8b8204d975390732b54a2d09a183f3c4f62: Status 404 returned error can't find the container with id c643b438626cbdfc88a42339d368d8b8204d975390732b54a2d09a183f3c4f62 Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823225 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823298 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823316 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823325 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823335 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823344 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823354 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823363 4719 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823372 4719 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823382 4719 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823390 4719 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823399 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823409 4719 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823419 4719 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823427 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823436 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823444 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823451 4719 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823459 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823468 4719 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823476 4719 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823485 4719 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823492 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823501 
4719 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823509 4719 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823517 4719 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823525 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823533 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823541 4719 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823549 4719 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823557 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823566 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823573 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823580 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823588 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823596 4719 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc 
kubenswrapper[4719]: I1124 08:54:05.823604 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823613 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823621 4719 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823628 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823636 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823644 4719 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823651 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823659 4719 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823666 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823676 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823684 4719 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823692 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823700 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc 
kubenswrapper[4719]: I1124 08:54:05.823708 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823716 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823724 4719 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.823733 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.841495 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.857461 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.875663 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:05 crc kubenswrapper[4719]: I1124 08:54:05.893781 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.125885 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.126088 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:54:07.126059972 +0000 UTC m=+23.457333224 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.227372 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.227432 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.227555 4719 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.227585 4719 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.227627 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:07.227606978 +0000 UTC m=+23.558880230 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.227671 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:07.22764903 +0000 UTC m=+23.558922282 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.328198 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.328234 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.328245 4719 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.328622 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.328730 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:07.328716482 +0000 UTC m=+23.659989734 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.328767 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.328865 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.328881 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.328889 4719 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.328925 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:07.328918108 +0000 UTC m=+23.660191350 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.348867 4719 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-24 08:49:05 +0000 UTC, rotation deadline is 2026-10-10 23:49:54.511477069 +0000 UTC Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.348916 4719 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7694h55m48.162564101s for next certificate rotation Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.524051 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.525118 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.525844 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.526480 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.527070 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.527612 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.528320 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.528941 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.529692 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.530326 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.530888 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.531661 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.532247 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.532804 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.534232 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.534931 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.535844 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.536373 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.537102 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.537804 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.538400 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.539068 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.539598 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.540381 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.540864 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.541691 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.542811 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.543435 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.544145 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.544697 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.549126 4719 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.549264 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.551739 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.553076 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.553588 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.555587 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.556646 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.558174 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.559000 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.560367 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.560935 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.562159 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.563328 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.564211 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.564855 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.566089 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.567377 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.568167 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.568808 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.569927 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.570533 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.571843 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.572687 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.573600 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.691750 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a"} Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.691807 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27"} Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.691824 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fe8928e3ca6725a4fd98c94cd83faa098192db5279ef3bf6bdf88918e3d6f112"} Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.698803 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.699271 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.700698 4719 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761" exitCode=255 Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.700773 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761"} Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.700860 4719 scope.go:117] "RemoveContainer" containerID="79d36be386d91a3e7c09e9a675d0dff8ba8d3d11de2fec652d90b4c57eb9dd12" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.702620 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f689f4b218a107fa46330b5c2e505176ef583bc29aec3b1dfba1545bca57bc17"} Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.703710 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629"} Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.703739 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c643b438626cbdfc88a42339d368d8b8204d975390732b54a2d09a183f3c4f62"} Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.737220 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.755453 4719 scope.go:117] "RemoveContainer" containerID="60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761" Nov 24 08:54:06 crc kubenswrapper[4719]: E1124 08:54:06.755662 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.756104 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.832538 4719 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.858042 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.878437 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948291
9d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.927544 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:06Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:06 crc kubenswrapper[4719]: I1124 08:54:06.974399 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:06Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.068275 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d36be386d91a3e7c09e9a675d0dff8ba8d3d11de2fec652d90b4c57eb9dd12\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:53:48Z\\\",\\\"message\\\":\\\"W1124 08:53:47.876659 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
08:53:47.877136 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763974427 cert, and key in /tmp/serving-cert-2092127983/serving-signer.crt, /tmp/serving-cert-2092127983/serving-signer.key\\\\nI1124 08:53:48.247416 1 observer_polling.go:159] Starting file observer\\\\nW1124 08:53:48.253505 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 08:53:48.253671 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:53:48.255654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2092127983/tls.crt::/tmp/serving-cert-2092127983/tls.key\\\\\\\"\\\\nF1124 08:53:48.646113 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.132316 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.138587 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.138760 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:54:09.138743258 +0000 UTC m=+25.470016510 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.175912 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.221355 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.239655 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.239717 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.239812 4719 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.239865 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:09.239850402 +0000 UTC m=+25.571123654 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.240309 4719 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.240355 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:09.240346976 +0000 UTC m=+25.571620228 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.257870 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.277871 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.295989 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.340796 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.340859 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.340970 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.340985 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.340997 4719 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.341044 4719 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:09.341025908 +0000 UTC m=+25.672299160 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.341402 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.341426 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.341436 4719 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.341466 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:09.3414558 +0000 UTC m=+25.672729052 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.365644 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hnkb6"] Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.366009 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.373192 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9d2g8"] Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.373747 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.375801 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.376441 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.376640 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.376881 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.376912 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.377055 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.377139 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.377255 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.377742 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hkbjt"] Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.378036 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-v8ghd"] Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.378223 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvqzq"] Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.378318 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hkbjt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.378620 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.378687 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.379425 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.379455 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.390890 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.391332 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.391488 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.391619 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.391658 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.392138 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.392156 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.392292 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.392367 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.392472 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.392509 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.392642 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.409763 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.423186 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.437157 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442143 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-env-overrides\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442184 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442202 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-netns\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442218 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cnibin\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442234 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-os-release\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442249 4719 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-ovn\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442331 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/169d1eb7-ec71-4b89-95a5-980102c3e0f6-hosts-file\") pod \"node-resolver-hkbjt\" (UID: \"169d1eb7-ec71-4b89-95a5-980102c3e0f6\") " pod="openshift-dns/node-resolver-hkbjt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442382 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-systemd\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442412 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-bin\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442434 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442485 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e9122c9-57ef-4b8f-92a8-593533891255-cni-binary-copy\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442508 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-kubelet\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442530 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e9122c9-57ef-4b8f-92a8-593533891255-multus-daemon-config\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442551 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-cni-bin\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442573 4719 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-cni-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442595 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jz9h\" (UniqueName: \"kubernetes.io/projected/1e9122c9-57ef-4b8f-92a8-593533891255-kube-api-access-5jz9h\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442621 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-netd\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442643 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-system-cni-dir\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442667 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-socket-dir-parent\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442690 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442715 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-systemd-units\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442736 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe015f89-bb6b-4fa1-b687-192013956ed6-mcd-auth-proxy-config\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442759 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsg9w\" (UniqueName: \"kubernetes.io/projected/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-kube-api-access-bsg9w\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: 
\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442784 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-hostroot\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442808 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-etc-kubernetes\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442828 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442852 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xp6\" (UniqueName: \"kubernetes.io/projected/76442e88-72e2-4a86-99b4-bd07f0490aa9-kube-api-access-f7xp6\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442869 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe015f89-bb6b-4fa1-b687-192013956ed6-proxy-tls\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442895 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-netns\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442910 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-script-lib\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442927 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5smhw\" (UniqueName: \"kubernetes.io/projected/fe015f89-bb6b-4fa1-b687-192013956ed6-kube-api-access-5smhw\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442944 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g6vfb\" (UniqueName: \"kubernetes.io/projected/169d1eb7-ec71-4b89-95a5-980102c3e0f6-kube-api-access-g6vfb\") pod \"node-resolver-hkbjt\" (UID: \"169d1eb7-ec71-4b89-95a5-980102c3e0f6\") " pod="openshift-dns/node-resolver-hkbjt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442971 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-cnibin\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.442986 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-multus-certs\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443002 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-log-socket\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443017 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-slash\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443042 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-k8s-cni-cncf-io\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443060 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-kubelet\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443107 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-node-log\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443133 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovn-node-metrics-cert\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443153 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443179 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-cni-multus\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443195 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-conf-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443210 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-etc-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443225 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-var-lib-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443239 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-config\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443254 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe015f89-bb6b-4fa1-b687-192013956ed6-rootfs\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443267 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.443283 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-system-cni-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc 
kubenswrapper[4719]: I1124 08:54:07.443297 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-os-release\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.450845 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.465253 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d36be386d91a3e7c09e9a675d0dff8ba8d3d11de2fec652d90b4c57eb9dd12\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:53:48Z\\\",\\\"message\\\":\\\"W1124 08:53:47.876659 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 08:53:47.877136 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763974427 cert, and key in /tmp/serving-cert-2092127983/serving-signer.crt, /tmp/serving-cert-2092127983/serving-signer.key\\\\nI1124 08:53:48.247416 1 observer_polling.go:159] Starting file observer\\\\nW1124 08:53:48.253505 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 08:53:48.253671 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:53:48.255654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2092127983/tls.crt::/tmp/serving-cert-2092127983/tls.key\\\\\\\"\\\\nF1124 08:53:48.646113 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.480272 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.493845 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.504808 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.517477 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.519970 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.520147 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.520310 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.520345 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.520459 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.520536 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.536339 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f671
3d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.544232 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-hostroot\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.544325 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-hostroot\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.544565 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-etc-kubernetes\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.544477 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-etc-kubernetes\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.544741 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.544817 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.544910 4719 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe015f89-bb6b-4fa1-b687-192013956ed6-proxy-tls\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545019 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-netns\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545163 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-script-lib\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545267 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7xp6\" (UniqueName: \"kubernetes.io/projected/76442e88-72e2-4a86-99b4-bd07f0490aa9-kube-api-access-f7xp6\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545359 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vfb\" (UniqueName: \"kubernetes.io/projected/169d1eb7-ec71-4b89-95a5-980102c3e0f6-kube-api-access-g6vfb\") pod \"node-resolver-hkbjt\" (UID: \"169d1eb7-ec71-4b89-95a5-980102c3e0f6\") " pod="openshift-dns/node-resolver-hkbjt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545458 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-cnibin\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545089 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-netns\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545584 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-cnibin\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545647 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-multus-certs\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545707 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-log-socket\") pod \"ovnkube-node-fvqzq\" (UID: 
\"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545735 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5smhw\" (UniqueName: \"kubernetes.io/projected/fe015f89-bb6b-4fa1-b687-192013956ed6-kube-api-access-5smhw\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545768 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-k8s-cni-cncf-io\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545774 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-log-socket\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545790 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-kubelet\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545813 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-slash\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545835 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-node-log\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545856 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovn-node-metrics-cert\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545881 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-cni-multus\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545903 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-conf-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 
08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545927 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-etc-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545952 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545957 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-node-log\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545965 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-kubelet\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545976 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-var-lib-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545996 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-slash\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545998 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-k8s-cni-cncf-io\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545997 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-config\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546028 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-etc-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546030 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe015f89-bb6b-4fa1-b687-192013956ed6-rootfs\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546077 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546103 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-system-cni-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546124 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-os-release\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546153 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-env-overrides\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546173 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546195 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-netns\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546215 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cnibin\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546237 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-os-release\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546255 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-ovn\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546299 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/169d1eb7-ec71-4b89-95a5-980102c3e0f6-hosts-file\") pod \"node-resolver-hkbjt\" (UID: \"169d1eb7-ec71-4b89-95a5-980102c3e0f6\") " pod="openshift-dns/node-resolver-hkbjt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546321 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-bin\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546344 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546376 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e9122c9-57ef-4b8f-92a8-593533891255-cni-binary-copy\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.545929 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-script-lib\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546400 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-kubelet\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546422 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e9122c9-57ef-4b8f-92a8-593533891255-multus-daemon-config\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546439 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-system-cni-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546475 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fe015f89-bb6b-4fa1-b687-192013956ed6-rootfs\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " 
pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546474 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-systemd\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546512 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-cni-bin\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546532 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-cni-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546552 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jz9h\" (UniqueName: \"kubernetes.io/projected/1e9122c9-57ef-4b8f-92a8-593533891255-kube-api-access-5jz9h\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546566 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-config\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546569 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-netd\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546594 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-netd\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546613 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-system-cni-dir\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546619 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-cni-bin\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546512 4719 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-systemd\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546637 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-socket-dir-parent\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546658 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546668 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-cni-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546682 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-systemd-units\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546702 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe015f89-bb6b-4fa1-b687-192013956ed6-mcd-auth-proxy-config\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546725 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsg9w\" (UniqueName: \"kubernetes.io/projected/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-kube-api-access-bsg9w\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546814 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-cni-multus\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546842 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-conf-dir\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546876 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-multus-socket-dir-parent\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546900 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-var-lib-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546922 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-system-cni-dir\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546941 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-openvswitch\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546951 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/169d1eb7-ec71-4b89-95a5-980102c3e0f6-hosts-file\") pod \"node-resolver-hkbjt\" (UID: \"169d1eb7-ec71-4b89-95a5-980102c3e0f6\") " pod="openshift-dns/node-resolver-hkbjt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546975 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-netns\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546979 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-systemd-units\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547056 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cnibin\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547119 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e9122c9-57ef-4b8f-92a8-593533891255-multus-daemon-config\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547161 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-bin\") pod \"ovnkube-node-fvqzq\" (UID: 
\"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547212 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-os-release\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547241 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-ovn\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547277 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-os-release\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547325 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547337 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-cni-binary-copy\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547372 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-var-lib-kubelet\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.546399 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547801 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe015f89-bb6b-4fa1-b687-192013956ed6-mcd-auth-proxy-config\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547835 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-env-overrides\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.547976 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.548065 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e9122c9-57ef-4b8f-92a8-593533891255-cni-binary-copy\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.549109 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e9122c9-57ef-4b8f-92a8-593533891255-host-run-multus-certs\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.550889 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fe015f89-bb6b-4fa1-b687-192013956ed6-proxy-tls\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.551239 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovn-node-metrics-cert\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.555714 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.569350 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jz9h\" (UniqueName: \"kubernetes.io/projected/1e9122c9-57ef-4b8f-92a8-593533891255-kube-api-access-5jz9h\") pod \"multus-v8ghd\" (UID: \"1e9122c9-57ef-4b8f-92a8-593533891255\") " pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.574590 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7xp6\" (UniqueName: \"kubernetes.io/projected/76442e88-72e2-4a86-99b4-bd07f0490aa9-kube-api-access-f7xp6\") pod \"ovnkube-node-fvqzq\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 
08:54:07.575017 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.578683 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vfb\" (UniqueName: \"kubernetes.io/projected/169d1eb7-ec71-4b89-95a5-980102c3e0f6-kube-api-access-g6vfb\") pod \"node-resolver-hkbjt\" (UID: \"169d1eb7-ec71-4b89-95a5-980102c3e0f6\") " pod="openshift-dns/node-resolver-hkbjt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.582611 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5smhw\" (UniqueName: \"kubernetes.io/projected/fe015f89-bb6b-4fa1-b687-192013956ed6-kube-api-access-5smhw\") pod \"machine-config-daemon-hnkb6\" (UID: \"fe015f89-bb6b-4fa1-b687-192013956ed6\") " pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.584832 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsg9w\" (UniqueName: \"kubernetes.io/projected/c0b9662b-e98a-4933-8790-0dc5dc9f27b7-kube-api-access-bsg9w\") pod 
\"multus-additional-cni-plugins-9d2g8\" (UID: \"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\") " pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.599230 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.613070 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.624831 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.637669 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.650990 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.663246 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.680242 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.682469 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.691352 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" Nov 24 08:54:07 crc kubenswrapper[4719]: W1124 08:54:07.701052 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0b9662b_e98a_4933_8790_0dc5dc9f27b7.slice/crio-2a951e094a8838224419281bd1167c34b3fe473077638003189bdc6db5e92777 WatchSource:0}: Error finding container 2a951e094a8838224419281bd1167c34b3fe473077638003189bdc6db5e92777: Status 404 returned error can't find the container with id 2a951e094a8838224419281bd1167c34b3fe473077638003189bdc6db5e92777 Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.702079 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hkbjt" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.703098 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d36be386d91a3e7c09e9a675d0dff8ba8d3d11de2fec652d90b4c57eb9dd12\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:53:48Z\\\",\\\"message\\\":\\\"W1124 08:53:47.876659 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
08:53:47.877136 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763974427 cert, and key in /tmp/serving-cert-2092127983/serving-signer.crt, /tmp/serving-cert-2092127983/serving-signer.key\\\\nI1124 08:53:48.247416 1 observer_polling.go:159] Starting file observer\\\\nW1124 08:53:48.253505 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 08:53:48.253671 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:53:48.255654 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2092127983/tls.crt::/tmp/serving-cert-2092127983/tls.key\\\\\\\"\\\\nF1124 08:53:48.646113 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.711823 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-v8ghd" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.713532 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.717138 4719 scope.go:117] "RemoveContainer" containerID="60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761" Nov 24 08:54:07 crc kubenswrapper[4719]: E1124 08:54:07.717271 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.718471 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.718514 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"93485c18fdeed375301624431733ba1081ded13c030d75e4e288a63854fb8f6a"} Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.724378 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerStarted","Data":"2a951e094a8838224419281bd1167c34b3fe473077638003189bdc6db5e92777"} Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.735504 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.757328 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.790857 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.806192 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.836393 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"l
og-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.853243 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.869430 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.892159 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.909463 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.927803 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.943040 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:07 crc kubenswrapper[4719]: I1124 08:54:07.965999 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:07Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.733191 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1" exitCode=0 Nov 24 08:54:08 crc kubenswrapper[4719]: 
I1124 08:54:08.733248 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.733278 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"0ca030c9bc4e3269409339d8dc9218eb016fcb0bc34e23ccdb7db116c20d3eee"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.736787 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.738217 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v8ghd" event={"ID":"1e9122c9-57ef-4b8f-92a8-593533891255","Type":"ContainerStarted","Data":"89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.738239 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v8ghd" event={"ID":"1e9122c9-57ef-4b8f-92a8-593533891255","Type":"ContainerStarted","Data":"5af28a92e8b50f8e808c40158e052cad21d9afcc1b24b315135a0c9b9e06a7f3"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.739453 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hkbjt" event={"ID":"169d1eb7-ec71-4b89-95a5-980102c3e0f6","Type":"ContainerStarted","Data":"c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.739481 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hkbjt" event={"ID":"169d1eb7-ec71-4b89-95a5-980102c3e0f6","Type":"ContainerStarted","Data":"953cfb881564a0f62fabeb5f8d20ff7a115df49b3e55870d06aed9d7ff6fa2b5"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.741743 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerStarted","Data":"690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.743317 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.743355 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c"} Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.761232 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.777794 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.792487 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.808609 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.822336 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.839918 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.855000 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.892596 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp
6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484
b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.929746 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.951605 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.973749 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:08 crc kubenswrapper[4719]: I1124 08:54:08.994029 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:08Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.015571 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc 
kubenswrapper[4719]: I1124 08:54:09.036558 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.065626 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.094497 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.114688 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.136623 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z 
is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.151080 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.165311 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.173317 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.173499 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:54:13.173475743 +0000 UTC m=+29.504748995 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.183690 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.209189 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.230175 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.258010 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.276659 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.276708 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.276810 4719 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.276860 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 08:54:13.276845692 +0000 UTC m=+29.608118944 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.277119 4719 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.277259 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:13.277238764 +0000 UTC m=+29.608512016 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.377919 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.377973 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.378121 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.378137 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.378148 4719 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.378193 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:13.378178553 +0000 UTC m=+29.709451805 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.378216 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.378247 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.378259 4719 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.378316 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:13.378299737 +0000 UTC m=+29.709572989 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.501072 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.501920 4719 scope.go:117] "RemoveContainer" containerID="60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761" Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.502150 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.520199 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.520257 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.520268 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.520342 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.520398 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:09 crc kubenswrapper[4719]: E1124 08:54:09.520458 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.748578 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0b9662b-e98a-4933-8790-0dc5dc9f27b7" containerID="690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811" exitCode=0 Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.748652 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerDied","Data":"690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811"} Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.753944 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"} Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.753977 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"} Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.753990 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"} Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.753998 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"} Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.754008 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" 
event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"} Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.754018 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"} Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.765277 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.791600 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.805514 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.822401 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.834683 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.852368 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.876411 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.897444 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.921520 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.935940 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.956774 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:09 crc kubenswrapper[4719]: I1124 08:54:09.972236 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:09Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.763606 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerStarted","Data":"75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0"} Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.795598 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z 
is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.823940 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.840560 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.856737 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.872336 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.888838 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.905379 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.929251 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.970209 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:10 crc kubenswrapper[4719]: I1124 08:54:10.995939 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:10Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.030266 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.063462 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.354217 4719 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.361843 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.367388 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.372550 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.396173 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.410580 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.423806 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.436618 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.448963 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.459996 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.484318 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4
df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.497638 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.510415 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.519926 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.519908 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.520071 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.520142 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.520174 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.520230 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.526833 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reas
on\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.545320 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.567411 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.584066 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.596799 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.610931 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.632545 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4
df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.648072 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.672790 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.690103 4719 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.692006 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.692043 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.692142 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.692340 4719 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.698134 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.701873 4719 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.702206 4719 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.703397 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.703426 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.703437 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.703454 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.703465 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:11Z","lastTransitionTime":"2025-11-24T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.720203 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.728160 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.735067 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.735179 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.735247 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.735298 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.735315 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:11Z","lastTransitionTime":"2025-11-24T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.746790 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.749690 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.756117 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.756153 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.756164 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.756181 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.756193 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:11Z","lastTransitionTime":"2025-11-24T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.763818 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.770946 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"} Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.773432 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0b9662b-e98a-4933-8790-0dc5dc9f27b7" containerID="75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0" exitCode=0 Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.773466 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerDied","Data":"75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0"} Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.779535 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.779857 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.783915 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.783962 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.783978 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.783995 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.784007 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:11Z","lastTransitionTime":"2025-11-24T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.795131 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.803371 4719 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0
878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"size
Bytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}
,{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.807972 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.808007 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.808016 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.808031 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.808057 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:11Z","lastTransitionTime":"2025-11-24T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.815838 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.820320 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8805
1c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: E1124 08:54:11.820473 4719 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:54:11 crc 
kubenswrapper[4719]: I1124 08:54:11.823810 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.823841 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.823850 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.823866 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.823876 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:11Z","lastTransitionTime":"2025-11-24T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.835543 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",
\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.856429 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.870435 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.886568 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.909193 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.923215 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d
28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.926664 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.926703 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.926716 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.926732 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.926744 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:11Z","lastTransitionTime":"2025-11-24T08:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.940012 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.956710 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.975091 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:11 crc kubenswrapper[4719]: I1124 08:54:11.996743 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:11Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.013800 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.028267 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.028625 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.028662 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.028671 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.028686 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.028695 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.132393 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.132436 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.132445 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.132460 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.132470 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.236094 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.236135 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.236144 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.236159 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.236169 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.338337 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.338370 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.338381 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.338397 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.338409 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.441074 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.441102 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.441111 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.441124 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.441134 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.542933 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.542973 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.542985 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.543002 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.543013 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.646195 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.646264 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.646276 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.646291 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.646303 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.749096 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.749167 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.749180 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.749202 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.749214 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.782627 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0b9662b-e98a-4933-8790-0dc5dc9f27b7" containerID="79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28" exitCode=0 Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.782683 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerDied","Data":"79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.798916 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.814102 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.828950 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.847296 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.852473 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.852502 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.852515 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.852531 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.852540 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.863498 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.881443 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.899503 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.928017 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z 
is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.947586 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.963859 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.963925 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.963939 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.963958 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.963978 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:12Z","lastTransitionTime":"2025-11-24T08:54:12Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.966252 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:12 crc kubenswrapper[4719]: I1124 08:54:12.987659 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:12Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.004777 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.019004 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.066387 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.066783 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.066799 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.066818 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.066831 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.169587 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.169640 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.169649 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.169664 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.169674 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.227187 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.227359 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:54:21.227340759 +0000 UTC m=+37.558614011 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.272846 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.272883 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.272894 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.272909 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.272921 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.280132 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-2tjfc"] Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.280548 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.282596 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.282643 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.282769 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.282998 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.294723 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.318457 4719 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.328196 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.328251 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.328355 4719 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.328422 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:21.328405002 +0000 UTC m=+37.659678254 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.328472 4719 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.328669 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:21.328643749 +0000 UTC m=+37.659917071 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.333563 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.347968 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.361004 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.374008 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.375748 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.375775 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.375783 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.375797 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.375812 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.385518 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.397295 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.413759 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.429426 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f08cb2a9-92db-4e49-b823-2dff920fb6f1-host\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.429478 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.429503 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx82j\" (UniqueName: \"kubernetes.io/projected/f08cb2a9-92db-4e49-b823-2dff920fb6f1-kube-api-access-qx82j\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.429543 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f08cb2a9-92db-4e49-b823-2dff920fb6f1-serviceca\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.429636 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.429767 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.429824 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.429843 4719 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.429857 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.429880 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.429895 4719 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.429918 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:21.429896427 +0000 UTC m=+37.761169669 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.429945 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:21.429929828 +0000 UTC m=+37.761203080 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.430963 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.444686 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.460140 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.474152 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.478424 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.478470 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.478482 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.478501 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.478515 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.489396 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.520760 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.520770 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.520934 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.521064 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.521204 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:13 crc kubenswrapper[4719]: E1124 08:54:13.521347 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.531126 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f08cb2a9-92db-4e49-b823-2dff920fb6f1-host\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.531129 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f08cb2a9-92db-4e49-b823-2dff920fb6f1-host\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.531201 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx82j\" (UniqueName: \"kubernetes.io/projected/f08cb2a9-92db-4e49-b823-2dff920fb6f1-kube-api-access-qx82j\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.531225 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f08cb2a9-92db-4e49-b823-2dff920fb6f1-serviceca\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.551454 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx82j\" (UniqueName: \"kubernetes.io/projected/f08cb2a9-92db-4e49-b823-2dff920fb6f1-kube-api-access-qx82j\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.581252 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.581297 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.581306 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.581324 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.581334 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.670207 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f08cb2a9-92db-4e49-b823-2dff920fb6f1-serviceca\") pod \"node-ca-2tjfc\" (UID: \"f08cb2a9-92db-4e49-b823-2dff920fb6f1\") " pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.683877 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.683956 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.683971 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.683994 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.684022 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.786259 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.786293 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.786302 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.786318 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.786330 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.789958 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0b9662b-e98a-4933-8790-0dc5dc9f27b7" containerID="d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4" exitCode=0 Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.790014 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerDied","Data":"d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.815186 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z 
is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.836022 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.852785 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.869203 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.885762 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.893199 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2tjfc" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.897553 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.897592 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.897606 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.897649 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.897661 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:13Z","lastTransitionTime":"2025-11-24T08:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.905523 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.922156 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.940974 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.960180 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.975174 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:13 crc kubenswrapper[4719]: I1124 08:54:13.991114 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.000885 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.000929 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.000944 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.000967 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.000982 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.009502 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.028279 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.050506 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.108706 4719 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.108735 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.108744 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.108759 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.108769 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.211400 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.211452 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.211471 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.211494 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.211507 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.315226 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.315266 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.315275 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.315293 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.315304 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.318925 4719 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.418265 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.418309 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.418319 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.418342 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.418353 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.525602 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.525643 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.525653 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.525670 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.525679 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.552555 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f0
47013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.574785 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.593605 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.608619 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.626918 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.629587 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.629700 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.629769 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.629856 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.629951 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.652726 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.676617 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.695178 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.712322 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.731758 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.732673 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.732739 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.732754 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.732777 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.732791 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.754523 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.769570 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.784366 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.800494 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.829199 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerStarted","Data":"7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.836620 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2tjfc" event={"ID":"f08cb2a9-92db-4e49-b823-2dff920fb6f1","Type":"ContainerStarted","Data":"b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.836703 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2tjfc" event={"ID":"f08cb2a9-92db-4e49-b823-2dff920fb6f1","Type":"ContainerStarted","Data":"3020fdd35f9285fdd557445bd912b0281457a5b8f9cd0c707eb89e436bf5c769"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.837493 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.837523 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.837533 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.837547 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.837557 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.840514 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.840826 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.840981 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.859384 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.882827 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.884283 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.897105 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.898251 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.914616 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.930473 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.941324 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.941357 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.941368 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.941382 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.941392 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:14Z","lastTransitionTime":"2025-11-24T08:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.963125 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z 
is after 2025-08-24T17:21:41Z" Nov 24 08:54:14 crc kubenswrapper[4719]: I1124 08:54:14.987352 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.011349 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.027868 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.044370 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.044446 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.044458 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.044480 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.044495 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.057211 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.072292 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.090921 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.105325 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.121855 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.140848 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.148312 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.148378 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.148393 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.148417 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.148432 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.158423 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.172612 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.191111 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.221958 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.246257 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.251440 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.251488 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.251501 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.251523 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.251537 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.265238 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.315306 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.335929 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.352684 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.353481 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.353514 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.353522 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.353536 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.353545 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.373544 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.394135 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.415662 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.431289 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.455512 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.455553 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.455567 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.455585 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.455600 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.521354 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.521438 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.521447 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:15 crc kubenswrapper[4719]: E1124 08:54:15.521503 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:15 crc kubenswrapper[4719]: E1124 08:54:15.521633 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:15 crc kubenswrapper[4719]: E1124 08:54:15.521725 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.558680 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.558729 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.558740 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.558757 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.558767 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.662696 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.662762 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.662779 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.662810 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.662823 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.765112 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.765160 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.765172 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.765190 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.765206 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.846027 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0b9662b-e98a-4933-8790-0dc5dc9f27b7" containerID="7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326" exitCode=0 Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.846212 4719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.846809 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerDied","Data":"7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.868716 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.870333 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.870362 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.870370 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.870386 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.870396 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.887995 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.902912 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.917568 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.936970 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711
e9529c578600f3aba892042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.951827 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.969019 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.974924 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.974987 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.975003 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.975027 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.975077 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:15Z","lastTransitionTime":"2025-11-24T08:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:15 crc kubenswrapper[4719]: I1124 08:54:15.988834 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg
9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:15Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.004074 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.018533 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.034448 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.050524 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.068263 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.079610 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.079660 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.079674 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.079696 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.079709 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.083624 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.182517 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.182797 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.182872 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.182956 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.183103 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.285759 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.286118 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.286205 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.286303 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.286366 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.389579 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.389907 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.390003 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.390120 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.390211 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.492278 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.492313 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.492327 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.492342 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.492352 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.595757 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.596112 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.596346 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.596440 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.596518 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.699254 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.699312 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.699326 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.699350 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.699365 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.801717 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.801752 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.801761 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.801775 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.801784 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.855643 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0b9662b-e98a-4933-8790-0dc5dc9f27b7" containerID="e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856" exitCode=0 Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.855781 4719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.856185 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerDied","Data":"e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.875294 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.893002 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.903880 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.904132 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.904249 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.904348 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.904426 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:16Z","lastTransitionTime":"2025-11-24T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.911298 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a3
56750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.927428 4719 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.942407 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.957211 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.972792 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:16 crc kubenswrapper[4719]: I1124 08:54:16.987758 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:16Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.004096 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d
28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.007135 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.007169 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.007180 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.007199 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.007213 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.021652 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.036222 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.048879 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.062656 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.091977 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\"
,\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.110181 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.110232 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:17 crc 
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.110244 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.110261 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.110273 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.212571 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.212614 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.212628 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.212644 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.212654 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.315086 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.315181 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.315196 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.315214 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.315224 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.417287 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.417328 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.417341 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.417358 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.417370 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.519256 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.519301 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.519313 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.519327 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.519337 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.519837 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.519837 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:54:17 crc kubenswrapper[4719]: E1124 08:54:17.520007 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:17 crc kubenswrapper[4719]: E1124 08:54:17.519928 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.520227 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:17 crc kubenswrapper[4719]: E1124 08:54:17.520314 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.621746 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.621789 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.621797 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.621814 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.621824 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.723869 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.723901 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.723910 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.723926 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.723936 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.834205 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.834260 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.834270 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.834292 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.834303 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.862124 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" event={"ID":"c0b9662b-e98a-4933-8790-0dc5dc9f27b7","Type":"ContainerStarted","Data":"886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb"} Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.880876 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.897773 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.917805 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.931165 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.936384 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.936426 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.936438 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.936456 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.936470 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:17Z","lastTransitionTime":"2025-11-24T08:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.942537 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.967120 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:17 crc kubenswrapper[4719]: I1124 08:54:17.994519 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:17Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.010242 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.024872 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.038540 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.038563 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.038572 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.038584 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.038596 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.040072 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.057643 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
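The kube-apiserver-check-endpoints container above exited 255 (its own pod lookup failed during startup) and is now waiting in CrashLoopBackOff. The kubelet's restart back-off nominally starts at 10 seconds, doubles per consecutive failure, and caps at five minutes, which is why the waiting message reads "back-off 10s" on the first crash. A sketch of the nominal schedule (jitter ignored):

    # Nominal kubelet crash-loop restart delays: 10s doubling to a 300s cap.
    delay, cap = 10, 300
    schedule = []
    while delay < cap:
        schedule.append(delay)
        delay = min(delay * 2, cap)
    schedule.append(cap)
    print("restart delays (s):", schedule)  # [10, 20, 40, 80, 160, 300]
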
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.076939 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.089559 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.109203 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\"
,\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.141031 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.141343 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc 
kubenswrapper[4719]: I1124 08:54:18.141419 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.141514 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.141618 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.244860 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.244911 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.244921 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.244940 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.244952 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.348129 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.348175 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.348186 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.348205 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.348216 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
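The Ready=False condition above carries the root cause of the NotReady state: no CNI configuration file in /etc/kubernetes/cni/net.d/, which is expected here since ovnkube-controller has not finished starting. A quick stdlib check on the node, using the directory named in the message:

    from pathlib import Path

    cni_dir = Path("/etc/kubernetes/cni/net.d")  # directory from the log message
    confs = sorted(p.name for p in cni_dir.glob("*.conf*")) if cni_dir.exists() else []
    print(f"{cni_dir}: {confs or 'no CNI configuration files found'}")
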
Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.450685 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.450746 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.450758 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.450771 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.450781 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.553740 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.553786 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.553797 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.553814 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.553826 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.656283 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.656329 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.656338 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.656356 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.656366 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.759349 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.759402 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.759413 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.759431 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.759440 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.862528 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.862570 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.862581 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.862597 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.862607 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.866894 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/0.log" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.869858 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e" exitCode=1 Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.869904 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.871230 4719 scope.go:117] "RemoveContainer" containerID="28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.885395 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.900447 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.914447 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.927838 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d
28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.942825 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
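The rejected patch bodies themselves are plain JSON, escaped once by klog's quoting and once more by this capture, which is why every quote appears as \\\". A sketch that peels both layers from a single saved record and loads the patch; the two-pass count is specific to this capture, and a different pipeline may add more or fewer layers:

    import json

    def extract_patch(record: str) -> dict:
        # Payload sits between: failed to patch status \"{ ... }\" for pod
        payload = record.split('failed to patch status \\"', 1)[1]
        payload = payload.split('\\" for pod', 1)[0]
        for _ in range(2):  # one pass per escaping layer in this capture
            payload = payload.encode().decode("unicode_escape")
        return json.loads(payload)

    record = open("one-record.txt", encoding="utf-8").read()  # hypothetical file
    patch = extract_patch(record)
    print(patch["metadata"]["uid"], "->", list(patch.get("status", {})))
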
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.957499 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.965104 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.965141 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.965151 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.965167 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.965178 4719 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:18Z","lastTransitionTime":"2025-11-24T08:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.972346 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:18 crc kubenswrapper[4719]: I1124 08:54:18.991039 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:18Z\\\",\\\"message\\\":\\\"oval\\\\nI1124 08:54:18.619175 5825 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 08:54:18.619187 5825 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 08:54:18.619177 5825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:18.619207 5825 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 08:54:18.619223 5825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 08:54:18.619228 5825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 08:54:18.619273 5825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 08:54:18.619539 5825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 08:54:18.619570 5825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:18.619578 5825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 08:54:18.619597 5825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 08:54:18.619616 5825 factory.go:656] Stopping watch factory\\\\nI1124 08:54:18.619637 5825 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:18.619666 5825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 08:54:18.619680 5825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:18.619691 5825 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 
08:54:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:18Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.007606 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:19Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.024418 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:19Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.041787 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:19Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.058106 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\
\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:19Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.068095 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.068144 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.068159 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.068176 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.068186 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.070763 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:19Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.088925 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:19Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.170605 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.170996 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.171006 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.171023 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.171059 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.273914 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.273953 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.273961 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.273975 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.273994 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.376067 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.376116 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.376129 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.376149 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.376165 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.478348 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.478391 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.478408 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.478423 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.478434 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.520457 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.520457 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:19 crc kubenswrapper[4719]: E1124 08:54:19.520590 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:19 crc kubenswrapper[4719]: E1124 08:54:19.520641 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.520477 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:19 crc kubenswrapper[4719]: E1124 08:54:19.520695 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.580656 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.580687 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.580695 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.580708 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.580718 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.683105 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.683155 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.683165 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.683179 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.683191 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.785553 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.785592 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.785602 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.785618 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.785627 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.877193 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/0.log" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.881634 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.888075 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.888112 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.888123 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.888140 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.888152 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.991007 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.991067 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.991079 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.991096 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:19 crc kubenswrapper[4719]: I1124 08:54:19.991108 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:19Z","lastTransitionTime":"2025-11-24T08:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.094295 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.094333 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.094344 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.094359 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.094369 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.196997 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.197058 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.197071 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.197091 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.197102 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.299864 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.299911 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.299924 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.299946 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.299959 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.401974 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.402007 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.402014 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.402027 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.402056 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.505212 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.505257 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.505266 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.505281 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.505290 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.521435 4719 scope.go:117] "RemoveContainer" containerID="60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.609951 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.609993 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.610003 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.610024 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.610038 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.711747 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.711807 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.711819 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.711842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.711853 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.814755 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.814804 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.814816 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.814836 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.814849 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.848761 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt"] Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.849315 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.852200 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.852272 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.880441 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:20Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.887507 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.889536 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.890560 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.892180 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/1.log" Nov 24 08:54:20 crc kubenswrapper[4719]: 
I1124 08:54:20.892854 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/0.log" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.895536 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88" exitCode=1 Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.895575 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.895605 4719 scope.go:117] "RemoveContainer" containerID="28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.896556 4719 scope.go:117] "RemoveContainer" containerID="e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88" Nov 24 08:54:20 crc kubenswrapper[4719]: E1124 08:54:20.896781 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.904315 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:20Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.922709 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.923423 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.923514 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.923591 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.923660 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:20Z","lastTransitionTime":"2025-11-24T08:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.925413 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:20Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.936553 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T08:54:20Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.957483 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:18Z\\\",\\\"message\\\":\\\"oval\\\\nI1124 08:54:18.619175 5825 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 08:54:18.619187 5825 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 08:54:18.619177 5825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:18.619207 5825 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 08:54:18.619223 5825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 08:54:18.619228 5825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 08:54:18.619273 5825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 08:54:18.619539 5825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 08:54:18.619570 5825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:18.619578 5825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 08:54:18.619597 5825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 08:54:18.619616 5825 factory.go:656] Stopping watch factory\\\\nI1124 08:54:18.619637 5825 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:18.619666 5825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 08:54:18.619680 5825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:18.619691 5825 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 
08:54:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:20Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.973274 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:20Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:20 crc kubenswrapper[4719]: I1124 08:54:20.986316 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:20Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.000442 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:20Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.017175 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.022812 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvlxq\" (UniqueName: \"kubernetes.io/projected/7232e685-76c0-4605-8690-a19e65efdddf-kube-api-access-jvlxq\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.022856 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7232e685-76c0-4605-8690-a19e65efdddf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.022875 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7232e685-76c0-4605-8690-a19e65efdddf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.022932 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7232e685-76c0-4605-8690-a19e65efdddf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.026881 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.026927 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.026937 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.026956 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.026970 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.032390 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.045464 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.062726 4719 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.085473 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.102627 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.116944 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.123485 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7232e685-76c0-4605-8690-a19e65efdddf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.123651 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvlxq\" (UniqueName: \"kubernetes.io/projected/7232e685-76c0-4605-8690-a19e65efdddf-kube-api-access-jvlxq\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.123740 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7232e685-76c0-4605-8690-a19e65efdddf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.123839 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7232e685-76c0-4605-8690-a19e65efdddf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.124496 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7232e685-76c0-4605-8690-a19e65efdddf-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.124579 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/7232e685-76c0-4605-8690-a19e65efdddf-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.129068 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.129096 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.129107 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.129127 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.129140 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.132762 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7232e685-76c0-4605-8690-a19e65efdddf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.138985 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.146545 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvlxq\" (UniqueName: \"kubernetes.io/projected/7232e685-76c0-4605-8690-a19e65efdddf-kube-api-access-jvlxq\") pod \"ovnkube-control-plane-749d76644c-pvkgt\" (UID: \"7232e685-76c0-4605-8690-a19e65efdddf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.160886 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.163339 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.175990 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: W1124 08:54:21.178292 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7232e685_76c0_4605_8690_a19e65efdddf.slice/crio-15efb621378b99f37672b19ce23b13e3ce8716c80151a73c3121ee97b3d8490f WatchSource:0}: Error finding container 15efb621378b99f37672b19ce23b13e3ce8716c80151a73c3121ee97b3d8490f: Status 404 returned error can't find the container with id 15efb621378b99f37672b19ce23b13e3ce8716c80151a73c3121ee97b3d8490f Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.195096 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.208318 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.233728 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.233778 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.233790 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.233807 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.233819 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.237920 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:18Z\\\",\\\"message\\\":\\\"oval\\\\nI1124 08:54:18.619175 5825 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 08:54:18.619187 5825 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 08:54:18.619177 5825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:18.619207 5825 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 08:54:18.619223 5825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 08:54:18.619228 5825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 08:54:18.619273 5825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 08:54:18.619539 5825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 08:54:18.619570 5825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:18.619578 5825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 08:54:18.619597 5825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 08:54:18.619616 5825 factory.go:656] Stopping watch factory\\\\nI1124 08:54:18.619637 5825 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:18.619666 5825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 08:54:18.619680 5825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:18.619691 5825 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 
08:54:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.252404 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.269709 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.287800 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.306814 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.325526 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\
"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.326258 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.326444 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:54:37.326420139 +0000 UTC m=+53.657693431 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.336453 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.336483 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.336493 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.336511 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.336522 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.341449 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.358333 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.374192 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.389774 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.427531 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.427603 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.427729 4719 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.427762 4719 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.427833 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:37.427808681 +0000 UTC m=+53.759082113 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.427861 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
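
[Annotation] Every "Failed to update status for pod" entry above fails for the same reason: the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-24T08:54:21Z. A minimal sketch of the validity-window comparison behind the x509 error, using only the two timestamps quoted verbatim in the log (the real check is performed by Go's crypto/x509 against the full certificate chain):

```python
from datetime import datetime, timezone

# Timestamps quoted verbatim in the x509 errors above.
now = datetime(2025, 11, 24, 8, 54, 21, tzinfo=timezone.utc)
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)

# A certificate is usable only while notBefore <= now <= notAfter.
# Here `now` is roughly three months past notAfter.
if now > not_after:
    print(f"certificate has expired: current time {now.isoformat()} "
          f"is after {not_after.isoformat()}")
    print(f"expired for: {now - not_after}")
```
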
No retries permitted until 2025-11-24 08:54:37.427849282 +0000 UTC m=+53.759122754 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.438939 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.439099 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.439193 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.439290 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.439382 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.520067 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.520132 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.520352 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.520229 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.520374 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.520653 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.532704 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.532773 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.532898 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.532919 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.532930 4719 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.532983 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:37.532964182 +0000 UTC m=+53.864237434 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.533059 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.533076 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.533085 4719 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.533112 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:37.533104966 +0000 UTC m=+53.864378218 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.542484 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.542521 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.542532 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.542553 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.542563 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.619203 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5hv9d"] Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.619672 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.619733 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.636266 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.645485 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.645528 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.645538 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.645553 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.645562 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
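
[Annotation] The NodeNotReady condition above repeats one message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A small sketch of the readiness check that message implies — whether the directory contains any CNI config at all. The directory path is quoted in the log; the .conf/.conflist/.json suffix set is the conventional CNI one and is an assumption here:

```python
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"      # path quoted in the log
CNI_SUFFIXES = (".conf", ".conflist", ".json")   # conventional suffixes (assumption)

def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
    """True if at least one CNI configuration file is present,
    mirroring the condition the kubelet reports as NetworkReady."""
    try:
        return any(e.name.endswith(CNI_SUFFIXES) and e.is_file()
                   for e in os.scandir(conf_dir))
    except FileNotFoundError:
        return False

if not network_ready():
    print("NetworkReady=false: no CNI configuration file in", CNI_CONF_DIR)
```
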
Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.657383 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.677362 4719 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.692587 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.711440 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.724984 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.734730 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k665\" (UniqueName: \"kubernetes.io/projected/bd6beab7-bbb8-4abb-98b1-60c1f8360757-kube-api-access-2k665\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.734814 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.741118 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.748085 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.748127 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.748137 4719 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.748153 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.748163 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.757619 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.776806 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\
\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.790555 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.803386 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.815933 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.836258 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k665\" (UniqueName: \"kubernetes.io/projected/bd6beab7-bbb8-4abb-98b1-60c1f8360757-kube-api-access-2k665\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.837112 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.837329 4719 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.837386 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs podName:bd6beab7-bbb8-4abb-98b1-60c1f8360757 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:22.337371584 +0000 UTC m=+38.668644836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs") pod "network-metrics-daemon-5hv9d" (UID: "bd6beab7-bbb8-4abb-98b1-60c1f8360757") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.838366 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c
8326b97c3ccbc59c37580b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:18Z\\\",\\\"message\\\":\\\"oval\\\\nI1124 08:54:18.619175 5825 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 08:54:18.619187 5825 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 08:54:18.619177 5825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:18.619207 5825 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 08:54:18.619223 5825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 08:54:18.619228 5825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 08:54:18.619273 5825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 08:54:18.619539 5825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 08:54:18.619570 5825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:18.619578 5825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 08:54:18.619597 5825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 08:54:18.619616 5825 factory.go:656] Stopping watch factory\\\\nI1124 08:54:18.619637 5825 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:18.619666 5825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 08:54:18.619680 5825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:18.619691 5825 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 08:54:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.850877 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.850918 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.850928 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.850945 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.850957 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.854990 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.856012 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k665\" (UniqueName: \"kubernetes.io/projected/bd6beab7-bbb8-4abb-98b1-60c1f8360757-kube-api-access-2k665\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.873583 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.891518 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.902966 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" event={"ID":"7232e685-76c0-4605-8690-a19e65efdddf","Type":"ContainerStarted","Data":"ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.903016 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" event={"ID":"7232e685-76c0-4605-8690-a19e65efdddf","Type":"ContainerStarted","Data":"0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.903026 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" event={"ID":"7232e685-76c0-4605-8690-a19e65efdddf","Type":"ContainerStarted","Data":"15efb621378b99f37672b19ce23b13e3ce8716c80151a73c3121ee97b3d8490f"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.905233 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/1.log" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.908482 4719 scope.go:117] "RemoveContainer" containerID="e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88" Nov 24 08:54:21 crc kubenswrapper[4719]: E1124 08:54:21.908602 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.919688 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.933333 4719 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37
a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.948446 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.953077 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.953114 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.953125 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.953144 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.953158 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:21Z","lastTransitionTime":"2025-11-24T08:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.961886 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.974920 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:
20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:21 crc kubenswrapper[4719]: I1124 08:54:21.988102 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:21Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.002389 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.016169 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.028935 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.041176 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.056791 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.056822 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.056832 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.056846 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.056857 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.061382 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c
8326b97c3ccbc59c37580b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28b52331a79b1003dbfbc262cd0905d173a49711e9529c578600f3aba892042e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:18Z\\\",\\\"message\\\":\\\"oval\\\\nI1124 08:54:18.619175 5825 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1124 08:54:18.619187 5825 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 08:54:18.619177 5825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:18.619207 5825 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 08:54:18.619223 5825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 08:54:18.619228 5825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 08:54:18.619273 5825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 08:54:18.619539 5825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 08:54:18.619570 5825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:18.619578 5825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1124 08:54:18.619597 5825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 08:54:18.619616 5825 factory.go:656] Stopping watch factory\\\\nI1124 08:54:18.619637 5825 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:18.619666 5825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 08:54:18.619680 5825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:18.619691 5825 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 08:54:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.076287 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.091579 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.111842 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.128079 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.140735 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\
"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.158866 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.158916 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.158929 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.158947 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.158958 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.160399 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.172896 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.185162 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.199488 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.211030 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.220687 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.220722 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.220732 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.220746 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.220755 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.224475 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: E1124 08:54:22.233638 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.237447 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.237493 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.237504 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.237519 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.237529 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.238226 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: E1124 08:54:22.250898 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f
09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.251379 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.255351 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.255386 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.255397 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.255412 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.255423 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.262196 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: E1124 08:54:22.267235 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.270232 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.270279 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.270291 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.270308 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.270320 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: E1124 08:54:22.281517 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], ... }}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after
2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.281745 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e
7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.285932 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.285969 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.285978 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.285994 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.286004 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.295707 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: E1124 08:54:22.297798 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: E1124 08:54:22.297960 4719 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.299506 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.299549 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.299561 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.299577 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.299588 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.311132 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.325368 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.338608 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.342754 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:22 crc kubenswrapper[4719]: E1124 08:54:22.342888 4719 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:22 crc kubenswrapper[4719]: E1124 08:54:22.342974 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs podName:bd6beab7-bbb8-4abb-98b1-60c1f8360757 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:23.342956036 +0000 UTC m=+39.674229278 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs") pod "network-metrics-daemon-5hv9d" (UID: "bd6beab7-bbb8-4abb-98b1-60c1f8360757") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.350741 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q
x82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.362658 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:22Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.402685 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.402737 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.402751 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc 
kubenswrapper[4719]: I1124 08:54:22.402769 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.402780 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.505643 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.505691 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.505703 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.505722 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.505735 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.608124 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.608168 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.608177 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.608200 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.608209 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.710083 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.710122 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.710131 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.710151 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.710168 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.813383 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.813429 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.813439 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.813457 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.813467 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.915828 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.915866 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.915874 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.915889 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:22 crc kubenswrapper[4719]: I1124 08:54:22.915898 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:22Z","lastTransitionTime":"2025-11-24T08:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.018649 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.018685 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.018702 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.018722 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.018734 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.121686 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.121729 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.121737 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.121753 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.121763 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.223973 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.224012 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.224023 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.224095 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.224110 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.326269 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.326331 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.326347 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.326368 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.326382 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.352943 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:23 crc kubenswrapper[4719]: E1124 08:54:23.353174 4719 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:23 crc kubenswrapper[4719]: E1124 08:54:23.353250 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs podName:bd6beab7-bbb8-4abb-98b1-60c1f8360757 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:25.353231862 +0000 UTC m=+41.684505114 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs") pod "network-metrics-daemon-5hv9d" (UID: "bd6beab7-bbb8-4abb-98b1-60c1f8360757") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.428671 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.428706 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.428714 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.428728 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.428737 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.520570 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.520635 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.520697 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:54:23 crc kubenswrapper[4719]: E1124 08:54:23.520724 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.520589 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:54:23 crc kubenswrapper[4719]: E1124 08:54:23.520848 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:54:23 crc kubenswrapper[4719]: E1124 08:54:23.520916 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:54:23 crc kubenswrapper[4719]: E1124 08:54:23.520984 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.532696 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.532732 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.532743 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.532783 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.532803 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.635146 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.635178 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.635187 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.635200 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.635209 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.737958 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.738273 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.738338 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.738406 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.738485 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.841487 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.841755 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.841834 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.841931 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.842014 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.943837 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.943885 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.943896 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.943911 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:23 crc kubenswrapper[4719]: I1124 08:54:23.943920 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:23Z","lastTransitionTime":"2025-11-24T08:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.046242 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.046286 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.046295 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.046313 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.046322 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.148936 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.148979 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.148991 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.149008 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.149019 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.251321 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.251376 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.251395 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.251415 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.251431 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.354144 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.354182 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.354191 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.354224 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.354236 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.456543 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.456583 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.456591 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.456609 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.456619 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.536614 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.550650 4719 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.559411 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.559450 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.559461 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.559479 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.559492 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.561997 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.575064 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.589289 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.600655 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.611956 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.632116 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d
28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.645565 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.656932 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.661411 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.661452 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.661463 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.661481 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.661511 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.667748 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.691848 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.705791 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.716870 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.732157 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.748000 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:24Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.763799 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.763850 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.763861 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.763877 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.763890 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.866272 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.866609 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.866694 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.866796 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.866861 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.970402 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.971114 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.971158 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.971180 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:24 crc kubenswrapper[4719]: I1124 08:54:24.971194 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:24Z","lastTransitionTime":"2025-11-24T08:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.073773 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.073829 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.073841 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.073861 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.073872 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.176399 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.176443 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.176454 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.176470 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.176482 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.278833 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.278874 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.278885 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.278902 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.278913 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.373746 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:25 crc kubenswrapper[4719]: E1124 08:54:25.373892 4719 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:25 crc kubenswrapper[4719]: E1124 08:54:25.373987 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs podName:bd6beab7-bbb8-4abb-98b1-60c1f8360757 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:29.373959699 +0000 UTC m=+45.705232951 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs") pod "network-metrics-daemon-5hv9d" (UID: "bd6beab7-bbb8-4abb-98b1-60c1f8360757") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.380760 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.380795 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.380805 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.380824 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.380835 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.483487 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.483529 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.483544 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.483560 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.483572 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.520212 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.520254 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.520317 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:25 crc kubenswrapper[4719]: E1124 08:54:25.520364 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:25 crc kubenswrapper[4719]: E1124 08:54:25.520457 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:25 crc kubenswrapper[4719]: E1124 08:54:25.520539 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.520608 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:25 crc kubenswrapper[4719]: E1124 08:54:25.520670 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.585626 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.585665 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.585674 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.585688 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.585698 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.687417 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.687453 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.687464 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.687481 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.687494 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.792874 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.792965 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.792983 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.793006 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.793024 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.897164 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.897228 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.897242 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.897259 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:25 crc kubenswrapper[4719]: I1124 08:54:25.897291 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:25Z","lastTransitionTime":"2025-11-24T08:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.000665 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.000717 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.000731 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.000751 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.000763 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.103629 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.103665 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.103675 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.103692 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.103704 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.207209 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.207261 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.207274 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.207293 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.207305 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.309226 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.309271 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.309280 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.309296 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.309306 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.411913 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.411949 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.411958 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.411974 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.411983 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.514062 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.514104 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.514114 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.514130 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.514141 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.616657 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.616699 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.616710 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.616728 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.616740 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.719614 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.720196 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.720299 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.720390 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.720522 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.822885 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.822923 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.822933 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.822949 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.822959 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.924987 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.925019 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.925027 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.925060 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:26 crc kubenswrapper[4719]: I1124 08:54:26.925070 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:26Z","lastTransitionTime":"2025-11-24T08:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.027796 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.027835 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.027847 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.027864 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.027877 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.130252 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.130298 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.130311 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.130331 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.130346 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.232641 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.232676 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.232703 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.232718 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.232728 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.336140 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.336559 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.336637 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.336706 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.336798 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.439127 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.439173 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.439185 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.439203 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.439216 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.520191 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.520235 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.520264 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.520688 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:54:27 crc kubenswrapper[4719]: E1124 08:54:27.520846 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:54:27 crc kubenswrapper[4719]: E1124 08:54:27.521057 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:54:27 crc kubenswrapper[4719]: E1124 08:54:27.521157 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:54:27 crc kubenswrapper[4719]: E1124 08:54:27.521283 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.541576 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.541842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.541934 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.542065 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.542158 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.645090 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.645131 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.645140 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.645156 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.645165 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.747112 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.747192 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.747208 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.747224 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.747233 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.849776 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.849813 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.849822 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.849836 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.849848 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.952072 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.952126 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.952137 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.952155 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:27 crc kubenswrapper[4719]: I1124 08:54:27.952166 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:27Z","lastTransitionTime":"2025-11-24T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.054811 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.054875 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.054887 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.054903 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.054912 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.157512 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.157555 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.157564 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.157583 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.157595 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.259665 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.259722 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.259733 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.259753 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.259765 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.363491 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.363542 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.363553 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.363581 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.363593 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.466284 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.466327 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.466340 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.466355 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.466367 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.569335 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.569371 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.569380 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.569395 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.569405 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.671729 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.671778 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.671788 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.671806 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.671821 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.774564 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.774611 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.774624 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.774641 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.774651 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.877343 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.877382 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.877392 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.877408 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.877419 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.979611 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.979681 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.979693 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.979710 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:28 crc kubenswrapper[4719]: I1124 08:54:28.979720 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:28Z","lastTransitionTime":"2025-11-24T08:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.082758 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.082825 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.082842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.082867 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.082883 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.185734 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.185810 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.185821 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.185842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.185854 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.288037 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.288095 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.288105 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.288125 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.288137 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.390958 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.391309 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.391442 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.391558 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.391637 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.413885 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:54:29 crc kubenswrapper[4719]: E1124 08:54:29.414126 4719 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 08:54:29 crc kubenswrapper[4719]: E1124 08:54:29.414201 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs podName:bd6beab7-bbb8-4abb-98b1-60c1f8360757 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:37.414176948 +0000 UTC m=+53.745450200 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs") pod "network-metrics-daemon-5hv9d" (UID: "bd6beab7-bbb8-4abb-98b1-60c1f8360757") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.495819 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.495873 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.495882 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.495901 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.495913 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.520792 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.520879 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.520901 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:54:29 crc kubenswrapper[4719]: E1124 08:54:29.520991 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:54:29 crc kubenswrapper[4719]: E1124 08:54:29.521207 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:54:29 crc kubenswrapper[4719]: E1124 08:54:29.521348 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.521493 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:54:29 crc kubenswrapper[4719]: E1124 08:54:29.521722 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.598464 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.598758 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.598850 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.598932 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.599002 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.701235 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.701277 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.701287 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.701305 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.701315 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.803514 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.803558 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.803569 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.803585 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.803597 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.905433 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.905482 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.905494 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.905509 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:29 crc kubenswrapper[4719]: I1124 08:54:29.905520 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:29Z","lastTransitionTime":"2025-11-24T08:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.007788 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.007828 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.007838 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.007854 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.007863 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.109902 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.109953 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.109966 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.109989 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.110002 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.213362 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.213406 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.213417 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.213434 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.213448 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.316197 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.316232 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.316242 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.316258 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.316271 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.418464 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.418513 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.418525 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.418542 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.418557 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.521114 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.521156 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.521167 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.521184 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.521198 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.623935 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.623974 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.623983 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.624000 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.624011 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.725784 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.725853 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.725868 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.725884 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.725895 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.828144 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.828186 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.828195 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.828212 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.828224 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.930937 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.930979 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.930991 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.931009 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.931019 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:30Z","lastTransitionTime":"2025-11-24T08:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.941903 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq"
Nov 24 08:54:30 crc kubenswrapper[4719]: I1124 08:54:30.942772 4719 scope.go:117] "RemoveContainer" containerID="e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.032874 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.033250 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.033262 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.033276 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.033284 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.136087 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.136129 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.136140 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.136159 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.136171 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.239374 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.239411 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.239420 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.239439 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.239450 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.341748 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.341798 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.341810 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.341829 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.341841 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.443590 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.443624 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.443635 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.443649 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.443659 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.520744 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.520776 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.520854 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.520824 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:31 crc kubenswrapper[4719]: E1124 08:54:31.520989 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:31 crc kubenswrapper[4719]: E1124 08:54:31.521196 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:31 crc kubenswrapper[4719]: E1124 08:54:31.521284 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:31 crc kubenswrapper[4719]: E1124 08:54:31.521376 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.546630 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.546676 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.546691 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.546709 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.546723 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.648929 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.648971 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.648981 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.649001 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.649014 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.751921 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.751978 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.751992 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.752016 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.752031 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.855021 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.855081 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.855094 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.855116 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.855128 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.939417 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/2.log" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.939963 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/1.log" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.942484 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2" exitCode=1 Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.942528 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.942570 4719 scope.go:117] "RemoveContainer" containerID="e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.943356 4719 scope.go:117] "RemoveContainer" containerID="cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2" Nov 24 08:54:31 crc kubenswrapper[4719]: E1124 08:54:31.943532 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.958214 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.958262 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.958277 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.958296 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.958309 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:31Z","lastTransitionTime":"2025-11-24T08:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.958232 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.972710 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:31 crc kubenswrapper[4719]: I1124 08:54:31.988605 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:31Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.004959 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.018372 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.030616 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 
08:54:32.043386 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.059164 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.060162 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.060212 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.060222 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.060238 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.060250 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.072492 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.096621 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.111175 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.123572 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.139219 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.155544 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.162991 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.163053 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.163064 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.163078 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.163087 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.170504 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.181361 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.265868 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.265908 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.265919 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.265935 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.265948 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.368573 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.368606 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.368615 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.368631 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.368640 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.434143 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.434174 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.434184 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.434198 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.434207 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: E1124 08:54:32.450781 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.455768 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.455808 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.455820 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.455839 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.455851 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: E1124 08:54:32.468134 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.471897 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.472144 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.472219 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.472285 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.472354 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: E1124 08:54:32.486481 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.490067 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.490105 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.490119 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.490139 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.490151 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: E1124 08:54:32.503372 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.507252 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.507293 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.507306 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.507324 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.507338 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: E1124 08:54:32.522549 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:32Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:32 crc kubenswrapper[4719]: E1124 08:54:32.522818 4719 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.524591 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
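Every failed status patch above ends with the same root cause: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-24T08:54:32Z. The sketch below is a hedged illustration of how one might confirm the certificate's validity window from the node; it is not part of the log, and it assumes Python 3 plus the third-party "cryptography" package are available on the host. The address is taken from the webhook URL in the error.

    # Sketch (hypothetical tooling, not from the log): fetch the serving
    # certificate presented on 127.0.0.1:9743 and print its validity window.
    # Assumes Python 3 with the third-party "cryptography" package, run on
    # the node where the webhook listens.
    import socket
    import ssl

    from cryptography import x509

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect the certificate without trusting it

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)

If notAfter prints a date in the past, that matches the x509 "certificate has expired" error the kubelet keeps logging.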
event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.524616 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.524625 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.524637 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.524646 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.627453 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.627510 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.627522 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.627542 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.627555 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.729956 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.729995 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.730005 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.730017 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.730027 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.832466 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.832542 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.832554 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.832570 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.832578 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.935252 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.935289 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.935300 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.935315 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.935328 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:32Z","lastTransitionTime":"2025-11-24T08:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:32 crc kubenswrapper[4719]: I1124 08:54:32.953966 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/2.log" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.037322 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.037353 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.037361 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.037373 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.037383 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.139516 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.139583 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.139594 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.139608 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.139618 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.241728 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.241766 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.241776 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.241792 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.241804 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.344744 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.344801 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.344813 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.344838 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.344856 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.447555 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.447601 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.447612 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.447628 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.447638 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.520616 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:33 crc kubenswrapper[4719]: E1124 08:54:33.520770 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.521028 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:33 crc kubenswrapper[4719]: E1124 08:54:33.521131 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.521275 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.521304 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:33 crc kubenswrapper[4719]: E1124 08:54:33.521446 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:33 crc kubenswrapper[4719]: E1124 08:54:33.521516 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.550169 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.550210 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.550220 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.550237 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.550248 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.652508 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.652551 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.652560 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.652575 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.652585 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.754697 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.754757 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.754772 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.754788 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.754799 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.857008 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.857054 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.857066 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.857082 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.857091 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.958912 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.958947 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.958955 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.958969 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:33 crc kubenswrapper[4719]: I1124 08:54:33.958982 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:33Z","lastTransitionTime":"2025-11-24T08:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.062842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.062900 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.062911 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.062931 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.062943 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.165529 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.165616 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.165635 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.165667 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.165691 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.269233 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.269285 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.269297 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.269314 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.269325 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.372411 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.372501 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.372517 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.372535 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.372547 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.475490 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.475546 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.475559 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.475589 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.475603 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.537727 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.552707 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.567505 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.578181 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.578234 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.578247 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.578265 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.578277 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.582333 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.598603 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.619028 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.640840 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 
08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.656435 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.669976 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.680987 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.681030 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.681066 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.681084 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.681098 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.685743 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.700389 4719 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.711984 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.726443 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.740959 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.770183 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.783012 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.783073 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.783086 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.783108 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.783119 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.796176 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:34Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.885835 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.885877 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.885888 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.885905 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.885918 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.988756 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.988802 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.988812 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.988829 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:34 crc kubenswrapper[4719]: I1124 08:54:34.988839 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:34Z","lastTransitionTime":"2025-11-24T08:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.091664 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.091755 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.091766 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.091781 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.091793 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.194119 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.194162 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.194175 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.194192 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.194207 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.297069 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.297114 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.297124 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.297137 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.297146 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.400412 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.400489 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.400506 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.400527 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.400541 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.505867 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.505891 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.505899 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.505914 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.505924 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.520230 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.520271 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.520271 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:35 crc kubenswrapper[4719]: E1124 08:54:35.520857 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.520290 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:35 crc kubenswrapper[4719]: E1124 08:54:35.520942 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:35 crc kubenswrapper[4719]: E1124 08:54:35.520617 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:35 crc kubenswrapper[4719]: E1124 08:54:35.520988 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.609963 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.609991 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.610001 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.610014 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.610023 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.712279 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.712534 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.712618 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.712782 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.712866 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.815648 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.815694 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.815707 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.815727 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.815741 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.918569 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.918621 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.918634 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.918654 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:35 crc kubenswrapper[4719]: I1124 08:54:35.918666 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:35Z","lastTransitionTime":"2025-11-24T08:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.021592 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.021639 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.021648 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.021664 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.021674 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.124703 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.125066 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.125187 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.125313 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.125402 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.228119 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.228731 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.228836 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.228943 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.229027 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.331452 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.331500 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.331510 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.331525 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.331535 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.434564 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.434903 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.434987 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.435135 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.435230 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.537385 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.537431 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.537443 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.537471 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.537484 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.639609 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.639648 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.639657 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.639672 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.639682 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.742388 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.742657 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.742718 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.742785 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.742886 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.845011 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.845073 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.845087 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.845105 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.845117 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.947821 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.948167 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.948234 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.948305 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:36 crc kubenswrapper[4719]: I1124 08:54:36.948401 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:36Z","lastTransitionTime":"2025-11-24T08:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.051382 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.051461 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.051478 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.051505 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.051520 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.154607 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.154699 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.154714 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.154759 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.154774 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.258080 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.258179 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.258194 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.258219 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.258232 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.361437 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.361500 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.361511 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.361530 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.361542 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.391382 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.391604 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:09.391563065 +0000 UTC m=+85.722836317 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.464508 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.464569 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.464583 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.464606 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.464621 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.492640 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.492742 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.492780 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.492887 4719 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.492887 4719 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.492954 4719 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.492981 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:55:09.492953397 +0000 UTC m=+85.824226649 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.493085 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:55:09.49307065 +0000 UTC m=+85.824343902 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.493106 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs podName:bd6beab7-bbb8-4abb-98b1-60c1f8360757 nodeName:}" failed. No retries permitted until 2025-11-24 08:54:53.493095591 +0000 UTC m=+69.824368843 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs") pod "network-metrics-daemon-5hv9d" (UID: "bd6beab7-bbb8-4abb-98b1-60c1f8360757") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.520888 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.520979 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.520888 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.521121 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.520917 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.521214 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.521265 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.521342 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.568414 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.568493 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.568508 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.568532 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.568546 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.594304 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.594396 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.594528 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.594548 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.594562 4719 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.594621 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:55:09.594600806 +0000 UTC m=+85.925874058 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.594528 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.594761 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.594773 4719 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:37 crc kubenswrapper[4719]: E1124 08:54:37.594807 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:55:09.594797052 +0000 UTC m=+85.926070304 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.671018 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.671106 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.671119 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.671138 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.671150 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.773563 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.773616 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.773628 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.773661 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.773676 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.876919 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.877015 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.877028 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.877085 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.877100 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.979714 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.979759 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.979772 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.979789 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:37 crc kubenswrapper[4719]: I1124 08:54:37.979830 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:37Z","lastTransitionTime":"2025-11-24T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.082218 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.082283 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.082297 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.082324 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.082342 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.184666 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.184713 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.184727 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.184744 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.184757 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.287759 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.287820 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.287832 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.287855 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.287869 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.390542 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.390587 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.390596 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.390612 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.390622 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.492855 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.492904 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.492920 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.492937 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.492948 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.595448 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.595481 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.595491 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.595505 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.595517 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.698325 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.698357 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.698365 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.698378 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.698387 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.800534 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.800625 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.800637 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.800680 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.800695 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.903086 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.903139 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.903164 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.903181 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:38 crc kubenswrapper[4719]: I1124 08:54:38.903191 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:38Z","lastTransitionTime":"2025-11-24T08:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.005725 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.005792 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.005803 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.005826 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.005837 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.107938 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.107985 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.107995 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.108010 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.108018 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.210599 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.210641 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.210651 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.210668 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.210679 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.314108 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.314138 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.314148 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.314163 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.314173 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.416963 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.417023 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.417038 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.417092 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.417111 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.520362 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.520433 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.520381 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:39 crc kubenswrapper[4719]: E1124 08:54:39.520586 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:39 crc kubenswrapper[4719]: E1124 08:54:39.520668 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:39 crc kubenswrapper[4719]: E1124 08:54:39.520724 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.520856 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:39 crc kubenswrapper[4719]: E1124 08:54:39.521243 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.522717 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.522754 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.522767 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.522786 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.522800 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.625506 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.625568 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.625581 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.625603 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.625617 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.728688 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.728735 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.728747 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.728768 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.728781 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.831246 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.831288 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.831314 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.831332 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.831344 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.906304 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.924355 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.935382 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.935437 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.935519 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.935545 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.935562 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:39Z","lastTransitionTime":"2025-11-24T08:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.938598 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.950847 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.966119 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:39 crc kubenswrapper[4719]: I1124 08:54:39.992616 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:39Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.012141 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 
08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.026376 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.038629 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.038711 4719 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.038722 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.038741 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.038753 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.041989 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.058046 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.071373 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.084105 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.094253 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.106247 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.120383 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.134820 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.141330 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.141362 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.141371 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.141388 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.141398 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
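Context for the NodeNotReady records here: kubelet flips the node's Ready condition to False because the container runtime keeps reporting NetworkReady=false until at least one CNI network config appears in /etc/kubernetes/cni/net.d/. Below is a rough sketch of that directory scan, assuming the usual libcni extensions (*.conf, *.conflist, *.json); it is an illustration, not the kubelet's literal code path:

```go
// Rough sketch: the runtime stays NetworkReady=false until its CNI conf
// dir holds at least one network config file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni considers
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir)
	}
}
```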
Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.148468 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.243323 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.243362 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.243371 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.243386 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.243397 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
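Every status patch in this window dies on the same TLS error from the pod.network-node-identity.openshift.io webhook: the certificate served on 127.0.0.1:9743 expired 2025-08-24T17:21:41Z, three months before the node clock's 2025-11-24T08:54:40Z. A hypothetical one-off checker, not part of any OpenShift tooling, that dials the endpoint and prints the validity window of whatever certificate it serves:

```go
// Dial the webhook endpoint from the log and report the served
// certificate's NotBefore/NotAfter against the current time.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // only reading the cert, not trusting it
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Println("NotBefore:", leaf.NotBefore.Format(time.RFC3339))
	fmt.Println("NotAfter: ", leaf.NotAfter.Format(time.RFC3339))
	fmt.Println("Expired:  ", now.After(leaf.NotAfter))
}
```

The final comparison is the same test the Go x509 verifier is reporting in these lines as "certificate has expired or is not yet valid: current time ... is after ...".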
Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.345666 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.345734 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.345743 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.345756 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.345766 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.401982 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.413685 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.420325 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.431271 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.445491 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.447963 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.447995 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.448005 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.448022 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.448033 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.459748 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.474404 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.490021 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.504950 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.522699 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.537904 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.550773 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.550814 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.550826 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.550861 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.550872 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
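The setters.go:603 records carry the literal Ready condition as JSON, so it can be parsed straight out of captured logs. A minimal struct sketch whose field names simply mirror the keys printed in the line (they are written out here rather than imported from the upstream NodeCondition type):

```go
// Parse the Ready condition exactly as printed by setters.go.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n%s\n", c.Type, c.Status, c.Reason, c.Message)
}
```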
Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.551595 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.573689 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04a1045200380234fe027d1a1c781af9305346c8326b97c3ccbc59c37580b88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"message\\\":\\\" 6063 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-fvqzq after 0 failed attempt(s)\\\\nI1124 08:54:20.691554 6063 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-fvqzq\\\\nI1124 08:54:20.691102 6063 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 08:54:20.691571 6063 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:20.691669 6063 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializat\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.589271 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 
08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.603249 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.618700 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.633483 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.648343 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:40Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.653205 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.653250 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.653260 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.653275 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.653284 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.756223 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.756259 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.756271 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.756285 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.756296 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.858399 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.858442 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.858455 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.858473 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.858484 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.961657 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.961710 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.961720 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.961736 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:40 crc kubenswrapper[4719]: I1124 08:54:40.961746 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:40Z","lastTransitionTime":"2025-11-24T08:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.063548 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.063579 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.063590 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.063605 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.063616 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.166566 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.166607 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.166617 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.166634 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.166643 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.269638 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.269684 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.269698 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.269715 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.269727 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.372837 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.372871 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.372880 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.372895 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.372905 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.475869 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.475919 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.475937 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.475955 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.475967 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.519715 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.519801 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:41 crc kubenswrapper[4719]: E1124 08:54:41.519847 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.519722 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:41 crc kubenswrapper[4719]: E1124 08:54:41.519958 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.519742 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:41 crc kubenswrapper[4719]: E1124 08:54:41.520089 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:41 crc kubenswrapper[4719]: E1124 08:54:41.520168 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.578707 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.578737 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.578746 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.578759 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.578768 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.682331 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.682402 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.682418 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.682440 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.682453 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.785528 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.785575 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.785587 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.785605 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.785618 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.888302 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.888363 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.888376 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.888413 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.888429 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.991086 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.991134 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.991147 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.991168 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:41 crc kubenswrapper[4719]: I1124 08:54:41.991180 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:41Z","lastTransitionTime":"2025-11-24T08:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.094098 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.094152 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.094165 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.094185 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.094196 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.196532 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.196569 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.196578 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.196593 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.196602 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.299457 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.299493 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.299504 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.299519 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.299528 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.402319 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.402404 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.402418 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.402433 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.402443 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.505202 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.505243 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.505254 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.505273 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.505283 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.607569 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.607615 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.607624 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.607648 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.607658 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.683574 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.683610 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.683621 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.683639 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.683648 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: E1124 08:54:42.697431 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.701689 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.701746 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.701758 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.701779 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.701791 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: E1124 08:54:42.715363 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.720003 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.720080 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.720093 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.720113 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.720127 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: E1124 08:54:42.733739 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.738074 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.738110 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.738125 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.738144 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.738157 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: E1124 08:54:42.753153 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.759400 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.759436 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.759446 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.759464 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.759476 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: E1124 08:54:42.775879 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:42Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:42 crc kubenswrapper[4719]: E1124 08:54:42.776022 4719 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.777805 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.777832 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.777846 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.777862 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.777872 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.880954 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.881012 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.881021 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.881059 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.881072 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.983263 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.983305 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.983316 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.983333 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:42 crc kubenswrapper[4719]: I1124 08:54:42.983346 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:42Z","lastTransitionTime":"2025-11-24T08:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.086015 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.086085 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.086097 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.086115 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.086126 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.188427 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.188471 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.188482 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.188500 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.188514 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.291447 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.291481 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.291490 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.291504 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.291514 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.393552 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.393604 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.393617 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.393637 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.393650 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.496665 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.496719 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.496733 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.496753 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.496767 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.520243 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.520376 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.520282 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.520314 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:43 crc kubenswrapper[4719]: E1124 08:54:43.520592 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:43 crc kubenswrapper[4719]: E1124 08:54:43.520663 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:43 crc kubenswrapper[4719]: E1124 08:54:43.520774 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:43 crc kubenswrapper[4719]: E1124 08:54:43.521113 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.521488 4719 scope.go:117] "RemoveContainer" containerID="cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2" Nov 24 08:54:43 crc kubenswrapper[4719]: E1124 08:54:43.521657 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.538065 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.556107 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.571833 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.585716 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.599158 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.599214 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.599223 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.599239 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.599249 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.599533 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.615173 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.629586 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.644587 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.658776 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.682729 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.697908 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.701700 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.701752 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.701766 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.701787 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.701809 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.713187 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.729411 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.744216 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.763381 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.779301 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\
\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.796472 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:43Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.804598 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.804666 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.804679 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.804695 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.804706 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.907608 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.907661 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.907673 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.907690 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:43 crc kubenswrapper[4719]: I1124 08:54:43.907702 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:43Z","lastTransitionTime":"2025-11-24T08:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.009680 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.009719 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.009729 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.009744 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.009755 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.112174 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.112216 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.112246 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.112261 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.112271 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.215383 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.215441 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.215451 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.215465 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.215475 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.318022 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.318076 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.318087 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.318105 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.318122 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.421545 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.421618 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.421634 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.421661 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.421680 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.523684 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.523722 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.523735 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.523752 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.523764 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.538541 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.553089 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.570198 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d
28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.584238 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.600296 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.616008 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.625775 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.625804 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.625812 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.625826 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.625834 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.630763 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.654013 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.667737 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.680782 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.694836 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.711413 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.728765 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.728851 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.728865 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.728891 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.728909 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.731819 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.749360 4719 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.764823 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.782544 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.800876 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:44Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.831808 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.832163 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.832244 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.832340 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.832424 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.936471 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.936515 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.936527 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.936543 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:44 crc kubenswrapper[4719]: I1124 08:54:44.936553 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:44Z","lastTransitionTime":"2025-11-24T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.038733 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.038781 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.038795 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.038811 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.038824 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.141345 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.141393 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.141404 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.141421 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.141430 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.243914 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.243964 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.243978 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.243997 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.244011 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.347572 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.348262 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.348277 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.348316 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.348328 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.451295 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.451334 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.451342 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.451356 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.451367 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.520567 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:45 crc kubenswrapper[4719]: E1124 08:54:45.520730 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.520822 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:45 crc kubenswrapper[4719]: E1124 08:54:45.520884 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.520970 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:45 crc kubenswrapper[4719]: E1124 08:54:45.521154 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.521249 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:45 crc kubenswrapper[4719]: E1124 08:54:45.521343 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.554346 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.554400 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.554413 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.554432 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.554445 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.657177 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.657225 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.657236 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.657254 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.657266 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.760224 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.760259 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.760269 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.760289 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.760299 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.863603 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.863644 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.863655 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.863672 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.863685 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.967133 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.967173 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.967183 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.967197 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:45 crc kubenswrapper[4719]: I1124 08:54:45.967208 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:45Z","lastTransitionTime":"2025-11-24T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.070110 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.070151 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.070181 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.070200 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.070212 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:46Z","lastTransitionTime":"2025-11-24T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.173003 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.173099 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.173115 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.173136 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.173147 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:46Z","lastTransitionTime":"2025-11-24T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.275984 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.276054 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.276070 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.276091 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.276105 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:46Z","lastTransitionTime":"2025-11-24T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.380207 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.380272 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.380282 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.380300 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.380319 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:46Z","lastTransitionTime":"2025-11-24T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.484301 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.484356 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.484369 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.484395 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.484417 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:46Z","lastTransitionTime":"2025-11-24T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.589433 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.589482 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.589494 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.589512 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.589525 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:46Z","lastTransitionTime":"2025-11-24T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.692506 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.692553 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.692565 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.692581 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:46 crc kubenswrapper[4719]: I1124 08:54:46.692592 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:46Z","lastTransitionTime":"2025-11-24T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
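The setters.go:603 entries above show the kubelet writing the node's Ready condition as structured JSON, roughly every 100 ms, for as long as no CNI plugin is configured. As a companion, here is a minimal client-go sketch that reads the same Ready condition back from the API server; the kubeconfig path and the file name readycheck.go are assumptions for illustration, and this is reader-side tooling, not kubelet code:

// readycheck.go - a minimal sketch, assuming a reachable cluster and a
// kubeconfig at $HOME/.kube/config; it reads back the Ready condition
// that setters.go:603 is writing in the log above. Not kubelet code.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig at the default path; adjust for CRC setups.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Mirrors the fields in the condition JSON logged by setters.go:
			// status, reason, and the NetworkReady error message.
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}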
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.520217 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.520242 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:54:47 crc kubenswrapper[4719]: E1124 08:54:47.520329 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.520425 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:54:47 crc kubenswrapper[4719]: E1124 08:54:47.520510 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.520530 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:54:47 crc kubenswrapper[4719]: E1124 08:54:47.520606 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:54:47 crc kubenswrapper[4719]: E1124 08:54:47.520761 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.615759 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.615786 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.615796 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.615808 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:47 crc kubenswrapper[4719]: I1124 08:54:47.615818 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:47Z","lastTransitionTime":"2025-11-24T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.519826 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:54:49 crc kubenswrapper[4719]: E1124 08:54:49.519978 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.520202 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.520285 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.520435 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:54:49 crc kubenswrapper[4719]: E1124 08:54:49.520451 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:54:49 crc kubenswrapper[4719]: E1124 08:54:49.520510 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:54:49 crc kubenswrapper[4719]: E1124 08:54:49.520565 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.573612 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.573674 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.573688 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.573707 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:49 crc kubenswrapper[4719]: I1124 08:54:49.573721 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:49Z","lastTransitionTime":"2025-11-24T08:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:50 crc kubenswrapper[4719]: I1124 08:54:50.534656 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Nov 24 08:54:50 crc kubenswrapper[4719]: I1124 08:54:50.604452 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:50 crc kubenswrapper[4719]: I1124 08:54:50.604480 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:50 crc kubenswrapper[4719]: I1124 08:54:50.604489 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:50 crc kubenswrapper[4719]: I1124 08:54:50.604503 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:50 crc kubenswrapper[4719]: I1124 08:54:50.604516 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:50Z","lastTransitionTime":"2025-11-24T08:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
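The repeated "Recording event message for node" lines mean the kubelet is also emitting these transitions as Kubernetes Events against the Node object, so they can be read back through the API as well as from this journal. A small sketch that lists them (same assumed kubeconfig as the earlier sketch; events for Node objects are recorded in the "default" namespace):

// nodeevents.go - a minimal sketch, assuming cluster access as above,
// that lists the Node events (NodeNotReady and friends) this log is
// recording. Reader-side tooling, not kubelet code.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config")) // assumed path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Filter to events whose involved object is the crc Node.
	events, err := clientset.CoreV1().Events("default").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "involvedObject.name=crc,involvedObject.kind=Node"})
	if err != nil {
		panic(err)
	}
	for _, ev := range events.Items {
		fmt.Printf("%s  %s  %s\n", ev.LastTimestamp.Format("15:04:05"), ev.Reason, ev.Message)
	}
}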
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.520094 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.520140 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.520278 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.520406 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:54:51 crc kubenswrapper[4719]: E1124 08:54:51.520393 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:54:51 crc kubenswrapper[4719]: E1124 08:54:51.520491 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:54:51 crc kubenswrapper[4719]: E1124 08:54:51.520597 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:54:51 crc kubenswrapper[4719]: E1124 08:54:51.520693 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.528140 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.528176 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.528186 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.528201 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.528254 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:51Z","lastTransitionTime":"2025-11-24T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.734378 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.734444 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.734458 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.734479 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.734493 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:51Z","lastTransitionTime":"2025-11-24T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.837487 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.837547 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.837563 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.837588 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.837602 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:51Z","lastTransitionTime":"2025-11-24T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.940250 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.940296 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.940309 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.940332 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:51 crc kubenswrapper[4719]: I1124 08:54:51.940346 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:51Z","lastTransitionTime":"2025-11-24T08:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.043621 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.043655 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.043664 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.043678 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.043688 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.146448 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.146497 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.146510 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.146529 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.146541 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.249405 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.249459 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.249472 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.249489 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.249501 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.351884 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.351926 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.351938 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.351955 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.351968 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.454500 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.454549 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.454561 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.454580 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.454593 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.556877 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.556921 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.556932 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.556949 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.556959 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.659957 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.660005 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.660015 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.660057 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.660072 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.764318 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.764590 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.764608 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.764635 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.764652 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.868016 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.868146 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.868167 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.868187 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.868202 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.968998 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.969055 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.969065 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.969080 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.969090 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:52 crc kubenswrapper[4719]: E1124 08:54:52.985131 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:52Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.995943 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.995990 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.996006 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.996028 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:52 crc kubenswrapper[4719]: I1124 08:54:52.996058 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:52Z","lastTransitionTime":"2025-11-24T08:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.011684 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:53Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.017658 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.017726 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
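Every cycle in this log repeats the same readiness failure: the kubelet reports "no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?". As a minimal diagnostic sketch (assuming shell access on the node; the directory path is taken verbatim from the log above, while the helper name and file-extension list are illustrative), the check the network provider must eventually satisfy can be reproduced like this:

```python
#!/usr/bin/env python3
# Minimal sketch, assuming it is run directly on the node. The kubelet
# above reports "no CNI configuration file in /etc/kubernetes/cni/net.d/",
# so we simply list what is actually present in that directory.
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"       # path taken verbatim from the log
CNI_EXTENSIONS = (".conf", ".conflist", ".json")  # config file types CNI runtimes load

def cni_configs(path: str = CNI_CONF_DIR) -> list[str]:
    """Return CNI config files found in `path`, or [] if it is empty or missing."""
    try:
        return sorted(f for f in os.listdir(path) if f.endswith(CNI_EXTENSIONS))
    except FileNotFoundError:
        return []

if __name__ == "__main__":
    found = cni_configs()
    if found:
        print("CNI configs present:", ", ".join(found))
    else:
        # Matches the NetworkPluginNotReady condition recorded in the log:
        # the network provider has not written its configuration yet.
        print(f"no CNI configuration file in {CNI_CONF_DIR}")
```

Until a file appears in that directory, the container runtime keeps reporting NetworkReady=false, which is why every pod sandbox creation and node Ready transition above fails in the same way.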
event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.017740 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.017764 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.017781 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.034312 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:53Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.040899 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.040976 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
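The setters.go records embed the full Ready condition as JSON after `condition=`. A small sketch for extracting and summarizing that condition from kubelet journal output (assumptions: input arrives with one journal record per line, as `journalctl -u kubelet` emits it; the regex and script are illustrative, not part of any existing tool):

```python
#!/usr/bin/env python3
# Sketch: pull the condition={...} JSON out of "Node became not ready"
# records and print a one-line summary per occurrence. Usage (assumed):
#   journalctl -u kubelet | python3 ready_condition.py
import json
import re
import sys

# The condition JSON sits at the end of the record, so a greedy match
# anchored at end-of-line captures the whole object.
COND_RE = re.compile(r'"Node became not ready".*?condition=(\{.*\})\s*$')

for line in sys.stdin:
    m = COND_RE.search(line)
    if not m:
        continue
    cond = json.loads(m.group(1))
    print(f'{cond["lastTransitionTime"]}  {cond["type"]}={cond["status"]}  '
          f'reason={cond["reason"]}')
    print(f'  message: {cond["message"]}')
```

Run against this log, every match reports the same reason (KubeletNotReady) and message (the missing CNI configuration), confirming that the repeated NotReady transitions share a single cause.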
event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.040990 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.041017 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.041075 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.058326 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:53Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.064172 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.064261 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.064276 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.064301 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.064492 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.080253 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:53Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.080402 4719 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.082306 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.082337 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.082350 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.082369 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.082382 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.185592 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.185637 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.185646 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.185662 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.185674 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.288869 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.288941 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.288958 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.288980 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.288993 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.391530 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.391564 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.391574 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.391589 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.391601 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.494900 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.494957 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.494972 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.494997 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.495013 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.520849 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.520964 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.521005 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.520986 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.521144 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.521290 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.521459 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.521568 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.566819 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.567083 4719 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:53 crc kubenswrapper[4719]: E1124 08:54:53.567208 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs podName:bd6beab7-bbb8-4abb-98b1-60c1f8360757 nodeName:}" failed. No retries permitted until 2025-11-24 08:55:25.567183661 +0000 UTC m=+101.898456993 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs") pod "network-metrics-daemon-5hv9d" (UID: "bd6beab7-bbb8-4abb-98b1-60c1f8360757") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.597817 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.597863 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.597876 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.597894 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.597908 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.700716 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.700760 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.700772 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.700787 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.700796 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.803840 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.803874 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.803884 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.803899 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.803909 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.906071 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.906104 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.906114 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.906129 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:53 crc kubenswrapper[4719]: I1124 08:54:53.906139 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:53Z","lastTransitionTime":"2025-11-24T08:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.008514 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.008562 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.008575 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.008596 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.008609 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.111389 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.111424 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.111436 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.111451 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.111462 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.214098 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.214135 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.214145 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.214161 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.214172 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.317662 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.317710 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.317720 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.317736 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.317746 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.420772 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.420819 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.420829 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.420845 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.420855 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.524110 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.524154 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.524165 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.524183 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.524195 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.539166 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.556571 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.572421 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.588536 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.601302 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.616483 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d
485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.626512 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.626562 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.626578 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.626636 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.626655 4719 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.638907 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.654125 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.667364 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.687455 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.700453 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.716578 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.729485 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.729532 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.729544 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.729563 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.729574 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.731328 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc33a3fc-4a67-4684-bef5-b433908724fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.747413 4719 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.763687 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.779823 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.794837 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\
\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.812846 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:54Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.831821 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.831871 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.831883 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.831902 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.831916 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.934606 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.934647 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.934659 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.934677 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:54 crc kubenswrapper[4719]: I1124 08:54:54.934688 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:54Z","lastTransitionTime":"2025-11-24T08:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.038026 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.038091 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.038116 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.038137 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.038149 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.140662 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.140718 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.140733 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.140754 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.140769 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.243325 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.243608 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.243694 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.243826 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.243912 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.345832 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.345875 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.345887 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.345904 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.345917 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.448342 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.448387 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.448406 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.448427 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.448439 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.520527 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.520547 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.520567 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.520810 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:55 crc kubenswrapper[4719]: E1124 08:54:55.520814 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:55 crc kubenswrapper[4719]: E1124 08:54:55.520873 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:55 crc kubenswrapper[4719]: E1124 08:54:55.520921 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:55 crc kubenswrapper[4719]: E1124 08:54:55.520960 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.521553 4719 scope.go:117] "RemoveContainer" containerID="cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.553016 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.553067 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.553079 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.553114 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.553125 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.656591 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.656630 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.656665 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.656684 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.656695 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.759316 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.759366 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.759375 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.759406 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.759417 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.861915 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.861967 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.861980 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.861998 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.862012 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.964575 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.964628 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.964642 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.964660 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:55 crc kubenswrapper[4719]: I1124 08:54:55.964674 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:55Z","lastTransitionTime":"2025-11-24T08:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.031864 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/2.log" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.035391 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"} Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.035845 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.067571 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.067619 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.067636 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.067656 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.067668 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.073063 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.094253 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.114762 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.132370 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 
08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.146788 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.165344 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.170005 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.170278 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.170431 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.170540 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.170639 4719 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.184601 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.205558 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.221806 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.237347 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.250859 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc33a3fc-4a67-4684-bef5-b433908724fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.269756 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.273934 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.273998 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.274012 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.274066 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.274082 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.287761 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.308948 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.324458 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.345363 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d
28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.368240 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.376670 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.377107 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.377202 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.377279 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.377355 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.390439 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:56Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.485990 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.486083 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.486097 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.486122 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.486136 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.588767 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.588826 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.588843 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.588870 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.588887 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.691677 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.691732 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.691746 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.691767 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.691782 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.794997 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.795060 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.795073 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.795093 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.795104 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.897958 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.898005 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.898017 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.898049 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:56 crc kubenswrapper[4719]: I1124 08:54:56.898062 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:56Z","lastTransitionTime":"2025-11-24T08:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.001537 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.001565 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.001574 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.001590 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.001600 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.040882 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/3.log"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.041696 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/2.log"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.045195 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9" exitCode=1
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.045263 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"}
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.045328 4719 scope.go:117] "RemoveContainer" containerID="cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.047826 4719 scope.go:117] "RemoveContainer" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"
Nov 24 08:54:57 crc kubenswrapper[4719]: E1124 08:54:57.048184 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.062138 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.077828 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.095028 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d
28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.105102 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.105152 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.105169 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.105197 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.105211 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.114714 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.130229 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.148971 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.163314 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.188470 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:56Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:56.514988 6550 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515064 6550 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515015 6550 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515958 6550 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:54:56.516018 6550 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:56.516122 6550 factory.go:656] Stopping watch factory\\\\nI1124 08:54:56.516154 6550 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:56.516164 6550 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:56.526872 6550 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:56.526909 6550 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:56.526995 6550 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:56.527030 6550 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:56.527181 6550 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.203680 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.208609 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.208649 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.208661 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.208678 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.208687 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.217353 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.236118 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.251506 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.268076 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.285108 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.299970 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.313425 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.313452 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.313459 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:57 crc 
kubenswrapper[4719]: I1124 08:54:57.313473 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.313482 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.317248 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc33a3fc-4a67-4684-bef5-b433908724fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.334968 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.353092 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:57Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.416155 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.416220 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.416232 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.416251 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.416263 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.519982 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.520088 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.520214 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.520254 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.520266 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.520281 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:57 crc kubenswrapper[4719]: E1124 08:54:57.520298 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.520298 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.520407 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.520442 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:57 crc kubenswrapper[4719]: E1124 08:54:57.520498 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:57 crc kubenswrapper[4719]: E1124 08:54:57.520536 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:57 crc kubenswrapper[4719]: E1124 08:54:57.520588 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.623513 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.623581 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.623595 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.623624 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.623641 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.725202 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.725248 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.725259 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.725276 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.725288 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.827835 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.827886 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.827902 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.827923 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.827941 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.930484 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.930538 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.930548 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.930567 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:57 crc kubenswrapper[4719]: I1124 08:54:57.930578 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:57Z","lastTransitionTime":"2025-11-24T08:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.033344 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.033403 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.033415 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.033433 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.033444 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.049394 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v8ghd_1e9122c9-57ef-4b8f-92a8-593533891255/kube-multus/0.log" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.049443 4719 generic.go:334] "Generic (PLEG): container finished" podID="1e9122c9-57ef-4b8f-92a8-593533891255" containerID="89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452" exitCode=1 Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.049492 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v8ghd" event={"ID":"1e9122c9-57ef-4b8f-92a8-593533891255","Type":"ContainerDied","Data":"89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.049927 4719 scope.go:117] "RemoveContainer" containerID="89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.053750 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/3.log" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.057360 4719 scope.go:117] "RemoveContainer" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9" Nov 24 08:54:58 crc kubenswrapper[4719]: E1124 08:54:58.057530 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.062890 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.081987 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.098080 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.112268 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.128948 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.137784 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.137875 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.137890 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.137910 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.137922 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.142497 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.159822 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.175348 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.195785 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.210401 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.236184 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebb98b1415ad4c7d203bd2df99c61cb3f5b51bee81485160909284f90ed51b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:31Z\\\",\\\"message\\\":\\\"a1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.764648 6261 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 08:54:31.764819 6261 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:31.765008 6261 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765125 6261 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.765509 6261 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:31.768498 6261 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:31.768529 6261 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:31.768604 6261 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:31.768629 6261 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:31.768724 6261 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:56Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:56.514988 6550 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515064 6550 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515015 6550 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515958 6550 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:54:56.516018 6550 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:56.516122 6550 factory.go:656] Stopping watch factory\\\\nI1124 08:54:56.516154 6550 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:56.516164 6550 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:56.526872 6550 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:56.526909 6550 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:56.526995 6550 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:56.527030 6550 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:56.527181 6550 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.240484 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.240676 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.240770 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.240866 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.240941 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.250331 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc33a3fc-4a67-4684-bef5-b433908724fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.264532 4719 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.279445 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.300165 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.316938 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:57Z\\\",\\\"message\\\":\\\"2025-11-24T08:54:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1\\\\n2025-11-24T08:54:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1 to /host/opt/cni/bin/\\\\n2025-11-24T08:54:12Z [verbose] multus-daemon started\\\\n2025-11-24T08:54:12Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:54:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.329918 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.343348 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.343379 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.343390 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.343424 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.343437 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.344523 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.358617 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.376026 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d
28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.390650 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.418246 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.430602 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.442018 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.446626 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.446696 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.446706 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.446722 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.446735 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.466323 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:56Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:56.514988 6550 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515064 6550 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515015 6550 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515958 6550 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:54:56.516018 6550 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:56.516122 6550 factory.go:656] Stopping watch factory\\\\nI1124 08:54:56.516154 6550 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:56.516164 6550 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:56.526872 6550 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:56.526909 6550 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:56.526995 6550 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:56.527030 6550 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:56.527181 6550 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.483561 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.497993 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.516195 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.533910 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.548874 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.548923 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.548936 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.548956 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.548967 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.555181 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.572213 4719 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:57Z\\\",\\\"message\\\":\\\"2025-11-24T08:54:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1\\\\n2025-11-24T08:54:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1 to /host/opt/cni/bin/\\\\n2025-11-24T08:54:12Z [verbose] multus-daemon started\\\\n2025-11-24T08:54:12Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:54:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.585528 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.601270 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc33a3fc-4a67-4684-bef5-b433908724fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.618112 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.635009 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.653060 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.653119 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.653133 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.653155 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.653169 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.653984 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:58Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.757291 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.757348 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.757360 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.757377 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.757390 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.859983 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.860019 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.860028 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.860064 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.860077 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.962838 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.962874 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.962884 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.962900 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:58 crc kubenswrapper[4719]: I1124 08:54:58.962911 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:58Z","lastTransitionTime":"2025-11-24T08:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.062161 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v8ghd_1e9122c9-57ef-4b8f-92a8-593533891255/kube-multus/0.log" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.062226 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v8ghd" event={"ID":"1e9122c9-57ef-4b8f-92a8-593533891255","Type":"ContainerStarted","Data":"6bfb8a0689605bb34e2409cb37e1feb999c406f1d39df1fae17d8839dd58e911"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.064794 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.064833 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.064846 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.064862 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.065252 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.075477 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.093531 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:56Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:56.514988 6550 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515064 6550 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515015 6550 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515958 6550 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:54:56.516018 6550 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:56.516122 6550 factory.go:656] Stopping watch factory\\\\nI1124 08:54:56.516154 6550 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:56.516164 6550 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:56.526872 6550 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:56.526909 6550 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:56.526995 6550 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:56.527030 6550 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:56.527181 6550 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.109254 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.122122 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.139462 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.153912 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.168112 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.168429 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.168450 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.168460 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.168478 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.168491 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.181533 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bfb8a0689605bb34e2409cb37e1feb999c406f1d39df1fae17d8839dd58e911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:57Z\\\",\\\"message\\\":\\\"2025-11-24T08:54:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1\\\\n2025-11-24T08:54:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1 to /host/opt/cni/bin/\\\\n2025-11-24T08:54:12Z [verbose] multus-daemon started\\\\n2025-11-24T08:54:12Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:54:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.192334 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.205987 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc33a3fc-4a67-4684-bef5-b433908724fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.218069 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.229314 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.244270 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.257562 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 
24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.270595 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.270627 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.270637 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.270652 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.270663 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.274797 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.290074 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.303942 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.318138 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:54:59Z is after 2025-08-24T17:21:41Z" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.372933 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.372972 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.372983 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.372999 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.373012 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.475842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.475902 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.475914 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.475935 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.475948 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.520635 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.520789 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:54:59 crc kubenswrapper[4719]: E1124 08:54:59.520912 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.520965 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.520990 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:54:59 crc kubenswrapper[4719]: E1124 08:54:59.521085 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:54:59 crc kubenswrapper[4719]: E1124 08:54:59.521146 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:54:59 crc kubenswrapper[4719]: E1124 08:54:59.521364 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.578584 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.578643 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.578664 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.578683 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.578703 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.681302 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.681359 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.681368 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.681381 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.681390 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.783656 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.783695 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.783704 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.783718 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.783728 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.887189 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.887245 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.887256 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.887275 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.887288 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.989810 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.989850 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.989864 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.989878 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:54:59 crc kubenswrapper[4719]: I1124 08:54:59.989887 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:54:59Z","lastTransitionTime":"2025-11-24T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 08:55:00 crc kubenswrapper[4719]: I1124 08:55:00.092608 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:00 crc kubenswrapper[4719]: I1124 08:55:00.092654 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:00 crc kubenswrapper[4719]: I1124 08:55:00.092667 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:00 crc kubenswrapper[4719]: I1124 08:55:00.092684 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:00 crc kubenswrapper[4719]: I1124 08:55:00.092696 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:00Z","lastTransitionTime":"2025-11-24T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The same five-entry status cycle repeats, with only the log timestamps advancing, roughly every 100 ms: at 08:55:00.195, 08:55:00.298, 08:55:00.400, 08:55:00.503, 08:55:00.606, 08:55:00.709, 08:55:00.811, 08:55:00.915, 08:55:01.018, 08:55:01.122, 08:55:01.225, 08:55:01.328, and 08:55:01.430.]
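The condition={...} payload in each setters.go:603 entry is plain JSON, so it can be pulled out of the journal and inspected mechanically. Below is a minimal Go sketch, assuming only the six keys visible in the entries above; the standalone nodeCondition struct is illustrative, mirroring the shape of the logged object rather than the kubelet's own types.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition mirrors the condition object printed by setters.go:603.
// Hypothetical helper type for illustration; not kubelet code.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition payload copied from one of the log entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:00Z","lastTransitionTime":"2025-11-24T08:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("condition %s=%s reason=%s\n", c.Type, c.Status, c.Reason)
}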
Nov 24 08:55:01 crc kubenswrapper[4719]: I1124 08:55:01.520263 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:55:01 crc kubenswrapper[4719]: I1124 08:55:01.520381 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:55:01 crc kubenswrapper[4719]: I1124 08:55:01.520263 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:55:01 crc kubenswrapper[4719]: I1124 08:55:01.520322 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:55:01 crc kubenswrapper[4719]: E1124 08:55:01.520453 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:55:01 crc kubenswrapper[4719]: E1124 08:55:01.520504 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:55:01 crc kubenswrapper[4719]: E1124 08:55:01.520631 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:55:01 crc kubenswrapper[4719]: E1124 08:55:01.520734 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
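All four sync failures share the single root cause spelled out in the message: the kubelet cannot build pod sandboxes until the network provider writes a CNI configuration into /etc/kubernetes/cni/net.d/. Below is a minimal Go sketch of that existence check, runnable on the node; the directory path comes from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkPluginNotReady message above.
	const cniDir = "/etc/kubernetes/cni/net.d"

	matches, err := filepath.Glob(filepath.Join(cniDir, "*"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "listing failed:", err)
		os.Exit(1)
	}
	if len(matches) == 0 {
		// Exactly the state the kubelet keeps reporting: NetworkReady stays
		// false until the network provider drops its config file here.
		fmt.Println("no CNI configuration files in", cniDir)
		os.Exit(1)
	}
	for _, m := range matches {
		fmt.Println("found CNI config:", m)
	}
}

While the listing stays empty, the NodeNotReady cycle above is the expected steady state rather than a separate fault.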
[The five-entry status cycle continues on the same ~100 ms cadence at 08:55:01.533, 08:55:01.635, 08:55:01.738, 08:55:01.841, 08:55:01.943, 08:55:02.046, 08:55:02.150, 08:55:02.253, 08:55:02.356, 08:55:02.459, 08:55:02.562, 08:55:02.665, 08:55:02.768, 08:55:02.871, 08:55:02.973, 08:55:03.076, 08:55:03.179, and 08:55:03.281.]
Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.384547 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.384594 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.384605 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.384625 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.384640 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.429006 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.429094 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.429109 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.429139 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.429152 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.452516 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.457599 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.457656 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.457668 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.457687 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.457697 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.471110 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.474905 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.474966 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.474980 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.474998 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.475010 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.488661 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.492272 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.492305 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.492316 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.492330 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.492340 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.506792 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.511087 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.511136 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.511149 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.511168 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.511182 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.520837 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.520878 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.520904 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.520875 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.520987 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.521107 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.521266 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.521320 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.526098 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:03Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:03 crc kubenswrapper[4719]: E1124 08:55:03.526246 4719 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.527782 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.527809 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.527820 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.527834 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.527844 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.630859 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.630909 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.630920 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.630941 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.630958 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.734327 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.734414 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.734432 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.734457 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.734472 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.837484 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.837531 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.837550 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.837569 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.837589 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.940938 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.940988 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.941001 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.941018 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:03 crc kubenswrapper[4719]: I1124 08:55:03.941029 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:03Z","lastTransitionTime":"2025-11-24T08:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.044097 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.044147 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.044160 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.044178 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.044195 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.147160 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.147832 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.147900 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.147967 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.148046 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.251322 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.251377 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.251389 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.251409 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.251421 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.354204 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.354243 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.354252 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.354265 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.354275 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.456336 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.456597 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.456910 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.457160 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.457397 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.534001 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.550379 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.560512 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.560568 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.560580 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.560596 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.560607 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
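The five-message stanza above repeats roughly every 100 ms: the kubelet keeps re-recording the node conditions and holds Ready=False because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet. A minimal diagnostic sketch of that check, assuming only the path quoted in the message (the extension list is a guess, not the kubelet's actual lookup order):

package main

import (
	"fmt"
	"path/filepath"
)

// Diagnostic sketch, not kubelet code: list candidate CNI config files in
// the directory the log message above complains about. The path comes from
// the message; the patterns are an assumption.
func main() {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join("/etc/kubernetes/cni/net.d", pat))
		if err != nil {
			fmt.Println("glob error:", err)
			continue
		}
		for _, m := range matches {
			fmt.Println("found:", m)
		}
	}
}

An empty result is exactly the state the kubelet is reporting; the stanza keeps repeating until the network plugin writes a config file there.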
Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.571297 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.590653 4719 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bfb8a0689605bb34e2409cb37e1feb999c406f1d39df1fae17d8839dd58e911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:57Z\\\",\\\"message\\\":\\\"2025-11-24T08:54:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1\\\\n2025-11-24T08:54:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1 to /host/opt/cni/bin/\\\\n2025-11-24T08:54:12Z [verbose] multus-daemon started\\\\n2025-11-24T08:54:12Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:54:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.602645 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.616873 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc33a3fc-4a67-4684-bef5-b433908724fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.630847 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.645895 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.665658 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.666004 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.666124 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.666191 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.666263 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.666325 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.680467 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.695894 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.711856 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.727764 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.742724 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.754223 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.768556 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.768639 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.768650 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.768669 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.768679 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.779075 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d
9a2733733a5d1ff80d185ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:56Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:56.514988 6550 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515064 6550 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515015 6550 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515958 6550 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:54:56.516018 6550 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:56.516122 6550 factory.go:656] Stopping watch factory\\\\nI1124 08:54:56.516154 6550 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:56.516164 6550 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:56.526872 6550 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:56.526909 6550 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:56.526995 6550 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:56.527030 6550 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:56.527181 6550 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.794556 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.809873 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:04Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.871366 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.871786 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.871937 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.872055 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.872185 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.975516 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.975582 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.975596 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.975619 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:04 crc kubenswrapper[4719]: I1124 08:55:04.975633 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:04Z","lastTransitionTime":"2025-11-24T08:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.079571 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.079634 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.079651 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.079675 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.079690 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.182788 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.182831 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.182842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.182859 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.182872 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.286179 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.286242 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.286258 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.286279 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.286295 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.389136 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.389180 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.389190 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.389206 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.389215 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.491946 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.491990 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.492000 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.492015 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.492025 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.520689 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.520772 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.520700 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.520904 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:05 crc kubenswrapper[4719]: E1124 08:55:05.520866 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:05 crc kubenswrapper[4719]: E1124 08:55:05.521008 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:05 crc kubenswrapper[4719]: E1124 08:55:05.521144 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:05 crc kubenswrapper[4719]: E1124 08:55:05.521214 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.594417 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.594495 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.594510 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.594531 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.594544 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.699217 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.699254 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.699263 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.699277 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.699288 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.801628 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.801680 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.801694 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.801713 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.801726 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.905168 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.905223 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.905234 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.905253 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:05 crc kubenswrapper[4719]: I1124 08:55:05.905264 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:05Z","lastTransitionTime":"2025-11-24T08:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.008600 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.008648 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.008659 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.008680 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.008702 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.111867 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.111940 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.111954 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.111978 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.111993 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.214607 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.214652 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.214670 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.214689 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.214700 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.317718 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.317763 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.317776 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.317797 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.317811 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.420514 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.420563 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.420579 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.420598 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.420611 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.523705 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.524158 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.524249 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.524366 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.524485 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.626910 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.626963 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.626975 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.626996 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.627008 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.729892 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.729940 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.729979 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.729995 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.730008 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.834350 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.834413 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.834428 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.834453 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.834466 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.938677 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.938705 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.938713 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.938731 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:06 crc kubenswrapper[4719]: I1124 08:55:06.938745 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:06Z","lastTransitionTime":"2025-11-24T08:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.041561 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.041601 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.041610 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.041625 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.041635 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.144523 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.144565 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.144576 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.144591 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.144604 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.246611 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.246653 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.246664 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.246681 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.246692 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.349655 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.349703 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.349716 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.349755 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.349768 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.452321 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.452370 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.452382 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.452404 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.452416 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.520094 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.520139 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.520193 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.520256 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:07 crc kubenswrapper[4719]: E1124 08:55:07.520276 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:07 crc kubenswrapper[4719]: E1124 08:55:07.520366 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:07 crc kubenswrapper[4719]: E1124 08:55:07.520481 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:07 crc kubenswrapper[4719]: E1124 08:55:07.520615 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.555256 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.555299 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.555309 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.555329 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.555342 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.658771 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.658812 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.658827 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.658843 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.658855 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.762390 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.762438 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.762451 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.762471 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.762485 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.866208 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.866289 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.866374 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.866397 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.866466 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.969138 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.969180 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.969194 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.969212 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:07 crc kubenswrapper[4719]: I1124 08:55:07.969226 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:07Z","lastTransitionTime":"2025-11-24T08:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.072268 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.072303 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.072314 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.072332 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.072342 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.175361 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.175415 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.175428 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.175449 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.175461 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.278012 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.278112 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.278125 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.278150 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.278165 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.380677 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.380730 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.380740 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.380760 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.380774 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.483388 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.483442 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.483460 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.483480 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.483492 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.586333 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.586395 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.586408 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.586428 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.586441 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.689182 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.689220 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.689228 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.689243 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.689253 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.792094 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.792169 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.792204 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.792227 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.792238 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.895771 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.895850 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.895862 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.895880 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.895892 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.998840 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.998879 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.998888 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.998903 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:08 crc kubenswrapper[4719]: I1124 08:55:08.998913 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:08Z","lastTransitionTime":"2025-11-24T08:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.101116 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.101431 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.101516 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.101594 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.101673 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.204380 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.204426 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.204439 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.204456 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.204468 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.306508 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.307107 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.307133 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.307154 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.307164 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.409646 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.409681 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.409693 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.409708 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.409721 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.467631 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.467895 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:56:13.46784508 +0000 UTC m=+149.799118332 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.513002 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.513066 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.513093 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.513112 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.513124 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.519967 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.520063 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.519994 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.520162 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.520231 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.520365 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.520411 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.520480 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.568603 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.568680 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.568750 4719 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.568838 4719 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.568857 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:56:13.568832941 +0000 UTC m=+149.900106193 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.568901 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 08:56:13.568883362 +0000 UTC m=+149.900156694 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.615270 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.615324 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.615336 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.615352 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.615362 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.670164 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.670253 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.670383 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.670425 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.670461 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.670475 4719 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.670532 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 08:56:13.670514312 +0000 UTC m=+150.001787564 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.670432 4719 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.670561 4719 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:55:09 crc kubenswrapper[4719]: E1124 08:55:09.670641 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 08:56:13.670620435 +0000 UTC m=+150.001893687 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.717988 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.718063 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.718075 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.718102 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.718118 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.821500 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.821570 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.821584 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.821607 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.821631 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.927421 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.927491 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.927511 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.927536 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:09 crc kubenswrapper[4719]: I1124 08:55:09.927554 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:09Z","lastTransitionTime":"2025-11-24T08:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.031162 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.031214 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.031233 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.031256 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.031270 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:10Z","lastTransitionTime":"2025-11-24T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.133749 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.134068 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.134202 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.134324 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.134397 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:10Z","lastTransitionTime":"2025-11-24T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.236936 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.237009 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.237021 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.237048 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.237060 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:10Z","lastTransitionTime":"2025-11-24T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.339252 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.339292 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.339301 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.339317 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.339329 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:10Z","lastTransitionTime":"2025-11-24T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.441742 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.441787 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.441798 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.441815 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.441829 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:10Z","lastTransitionTime":"2025-11-24T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.521363 4719 scope.go:117] "RemoveContainer" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9" Nov 24 08:55:10 crc kubenswrapper[4719]: E1124 08:55:10.521551 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.544962 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.545066 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.545077 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.545096 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.545108 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:10Z","lastTransitionTime":"2025-11-24T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.648316 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.648616 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.648682 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.648761 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.648827 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:10Z","lastTransitionTime":"2025-11-24T08:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the same five-message status cycle repeats roughly every 100 ms from 08:55:10.751 through 08:55:11.472; duplicate cycles omitted]
Nov 24 08:55:11 crc kubenswrapper[4719]: I1124 08:55:11.520003 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:55:11 crc kubenswrapper[4719]: E1124 08:55:11.520162 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:55:11 crc kubenswrapper[4719]: I1124 08:55:11.520226 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:55:11 crc kubenswrapper[4719]: I1124 08:55:11.520253 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:55:11 crc kubenswrapper[4719]: I1124 08:55:11.520290 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:55:11 crc kubenswrapper[4719]: E1124 08:55:11.520369 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:55:11 crc kubenswrapper[4719]: E1124 08:55:11.520591 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:55:11 crc kubenswrapper[4719]: E1124 08:55:11.520662 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
[the five-message status cycle continues at the same ~100 ms cadence from 08:55:11.575 through 08:55:13.431; duplicate cycles omitted]
Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.520156 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.520213 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.520233 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.520436 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:55:13 crc kubenswrapper[4719]: E1124 08:55:13.520636 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:55:13 crc kubenswrapper[4719]: E1124 08:55:13.520749 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:55:13 crc kubenswrapper[4719]: E1124 08:55:13.520816 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:55:13 crc kubenswrapper[4719]: E1124 08:55:13.520952 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[status cycle repeats at 08:55:13.535; duplicates omitted]
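[editor's note: each "Node became not ready" entry from setters.go:603 carries the Ready condition as inline JSON, so the reason and message can be pulled out of a captured journal programmatically. A minimal log-analysis sketch follows; it is a hypothetical helper, not part of the kubelet, and the regex assumes the condition={...} payload ends the line, as it does in the entries above:]

```python
#!/usr/bin/env python3
"""Extract the Ready condition from 'Node became not ready' journal lines."""
import json
import re

# Matches the inline JSON payload that setters.go:603 appends to the entry.
CONDITION_RE = re.compile(r'condition=(\{.*\})')

def ready_condition(line):
    """Return the condition dict from one journal line, or None."""
    m = CONDITION_RE.search(line)
    return json.loads(m.group(1)) if m else None

# One entry copied verbatim from the log above.
sample = ('Nov 24 08:55:10 crc kubenswrapper[4719]: I1124 08:55:10.648827 4719 '
          'setters.go:603] "Node became not ready" node="crc" '
          'condition={"type":"Ready","status":"False",'
          '"lastHeartbeatTime":"2025-11-24T08:55:10Z",'
          '"lastTransitionTime":"2025-11-24T08:55:10Z",'
          '"reason":"KubeletNotReady","message":"container runtime network '
          'not ready: NetworkReady=false reason:NetworkPluginNotReady '
          'message:Network plugin returns error: no CNI configuration file '
          'in /etc/kubernetes/cni/net.d/. Has your network provider started?"}')

cond = ready_condition(sample)
print(cond["reason"])   # -> KubeletNotReady
print(cond["message"])  # -> the NetworkPluginNotReady explanation
```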
Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.540339 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
[status cycle repeats at 08:55:13.610; duplicates omitted]
Nov 24 08:55:13 crc kubenswrapper[4719]: E1124 08:55:13.623879 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:13Z is after 2025-08-24T17:21:41Z"
[status cycle repeats at 08:55:13.628 and 08:55:13.647; duplicates omitted]
[the "Error updating node status, will retry" patch failure recurs with an identical payload and the same expired node.network-node-identity.openshift.io webhook certificate at 08:55:13.642699; a third attempt begins at 08:55:13.661866 (entry truncated); duplicates omitted]
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.666907 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.666950 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.666960 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.666975 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.666996 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:13Z","lastTransitionTime":"2025-11-24T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:13 crc kubenswrapper[4719]: E1124 08:55:13.682678 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.688011 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.688082 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.688095 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.688127 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.688138 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:13Z","lastTransitionTime":"2025-11-24T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:13 crc kubenswrapper[4719]: E1124 08:55:13.703503 4719 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T08:55:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9cbbf992-da51-4b2a-a4b6-3c8c8d85ee77\\\",\\\"systemUUID\\\":\\\"f09286b9-10a4-4ae2-b7f4-49183b71cd1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:13Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:13 crc kubenswrapper[4719]: E1124 08:55:13.703697 4719 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.705509 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.705538 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.705549 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.705565 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.705577 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:13Z","lastTransitionTime":"2025-11-24T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.808770 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.808838 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.808852 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.808868 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.808880 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:13Z","lastTransitionTime":"2025-11-24T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.914727 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.914774 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.914790 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.914806 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:13 crc kubenswrapper[4719]: I1124 08:55:13.914825 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:13Z","lastTransitionTime":"2025-11-24T08:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.016792 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.016828 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.016836 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.016852 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.016862 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.118349 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.118383 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.118394 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.118408 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.118419 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.220693 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.220741 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.220751 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.220766 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.220776 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.322982 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.323088 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.323104 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.323125 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.323139 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.425429 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.425464 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.425476 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.425493 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.425505 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.528373 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.528444 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.528459 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.528485 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.528499 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.536780 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2tjfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f08cb2a9-92db-4e49-b823-2dff920fb6f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b22ea9cd2b592b7e0b6c688ab46f053dc448bfe07d5e699f3f1ae39fe9d28d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2tjfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.552274 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc33a3fc-4a67-4684-bef5-b433908724fa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1964a454309c33ecb8ec0042942827fc0c84ba793fdd83b77d70294adc7abbfc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fff16fd2d707b9a70fa3dc10140ae73f6629ffbf9043ce4e7435b59328af33b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.570468 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.587794 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.605008 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0b9662b-e98a-4933-8790-0dc5dc9f27b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://886bce52c4e66ffba7641a4acaaa0357c4b05d59543e4974c8ca918f047fb8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://690dda5754b75377ffb9c8157735cd5443eb02e48fdec2924d8325ab59eac811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75add3100c7ab5e5a356750e02defbaf9dd019a9507e0dc40e18b93c23bb40d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79d496f7b951ba286b75b53c2999c64d8ba8d162557264a290d26915a9a5ec28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d85aac05c4ef15ab1a7cb0a4e098cde420a6b5638d0d23d0950dc2e6c6f852f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d99552e7faaffffe86b39dfdef914c3f35f168dc83c79cc1dbe1f99860c5326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e02e6158c5bec9fedaacbdbce101929b31ba44d45075b21e72371a6b5c625856\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsg9w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9d2g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.620976 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-v8ghd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e9122c9-57ef-4b8f-92a8-593533891255\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6bfb8a0689605bb34e2409cb37e1feb999c406f1d39df1fae17d8839dd58e911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:57Z\\\",\\\"message\\\":\\\"2025-11-24T08:54:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1\\\\n2025-11-24T08:54:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7b03f58a-d50e-4998-b618-739de16030d1 to /host/opt/cni/bin/\\\\n2025-11-24T08:54:12Z [verbose] multus-daemon started\\\\n2025-11-24T08:54:12Z [verbose] Readiness Indicator file check\\\\n2025-11-24T08:54:57Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jz9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-v8ghd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.631252 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.631343 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.631359 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.631376 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.631387 4719 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.640612 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01a4f40f582d02ce258bd3a9ce80b39dc6e56bdd179cd3fa1c57b46b6522711f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.657977 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"795b7b82-462b-4d27-8ea7-71213924683e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da44fd23f79051964e08473ae9e0cc15ae82e5101db1a2212f1fdebe0de8392c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d56148c7ba135f86f1105c72db6dfbf8a5d5b69b7d33fbb23bc1697780af3a21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84e6a8a1e8cfb99e165a504e5e644a387e988ebab1012a81fff447a4b905d00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c6f2900dcce4ae76ea546bd16d28680c69b26fe34fae4cb37a228c68eef1acc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.671566 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3528880e-64fa-488f-9855-63f67e92abcb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4f0b8f7a8ad7727714c244b8431083d624996f15030cea0b94f136aec1052e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21113afe48c6b1841f5739537f425c5e33b0ccc731715fda8a14323b8e5660fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b98d3e414bcadf07626d331c7e3c0d8db810268042f59cbaeda09e24832e245a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2df5a59bfc18cd80a0cbf70e1669f82a3daf6f10aa9e8250a0e1bc0ba212120e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.687400 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://820e13804767020d0469a97e70747f87c0ab4c65b857c83db15354302e4e623a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://750ca1fdd654a4a5e14f83f24581d31820b59c1d4ae90aeab690f617145aef27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.700638 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe015f89-bb6b-4fa1-b687-192013956ed6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://538a5d6f6c5dbcbea96e8a733d8df18fd6733d9c5df01ee0be04a0e3351faf0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smhw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hnkb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.722788 4719 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76442e88-72e2-4a86-99b4-bd07f0490aa9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T08:54:56Z\\\",\\\"message\\\":\\\"orkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 08:54:56.514988 6550 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515064 6550 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515015 6550 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 08:54:56.515958 6550 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 08:54:56.516018 6550 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1124 08:54:56.516122 6550 factory.go:656] Stopping watch factory\\\\nI1124 08:54:56.516154 6550 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 08:54:56.516164 6550 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 08:54:56.526872 6550 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 08:54:56.526909 6550 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 08:54:56.526995 6550 ovnkube.go:599] Stopped ovnkube\\\\nI1124 08:54:56.527030 6550 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 08:54:56.527181 6550 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7xp6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvqzq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.734616 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.734651 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.734661 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.734677 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.734688 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.735158 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7232e685-76c0-4605-8690-a19e65efdddf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a01cc51710b12118b611bd41f74c17efe3cd1fed0add925d334d9bab5d957f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ece7aaf674c189411389afc987269ee27d58092dd5dae99ac88638f9cddb4de3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvlxq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pvkgt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.745250 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6beab7-bbb8-4abb-98b1-60c1f8360757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2k665\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5hv9d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc 
kubenswrapper[4719]: I1124 08:55:14.763816 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2119a46d-fd4e-4162-964a-944e8e5ea934\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://972026082ecb29f92af5f30a5297fb1047125336f8145895c126530b3082b4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f888642b6494a1422d2b25965671479cd88bebf84b127de8de8c726572f72c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf3bcdeb2a0044e56d06d19317443a1b069a3b0bb0eab2de6603e641130c7731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d55e9206be5fe5cb0e557096229539729966134fae4410cf72ffc8008b95fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a0d5ae150d0637979620e27dc63a48e46db3b20b0e750cbcadd5b80defca29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81203bd5dbafe3b5e5acab4f4ba0ce46d35265449b86b40a2d2f1ee24d71cf47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81203bd5dbafe3b5e5acab4f4ba0ce46d35265449b86b40a2d2f1ee24d71cf47\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7881726d77b0251100777df4c0d6f81a91925067d084a59913168dc14874c279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7881726d77b0251100777df4c0d6f81a91925067d084a59913168dc14874c279\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ac9c0b24e2ceb8d90e7fe0e2ae69c9da5b1777736919e65e3a6cef8884189786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9c0b24e2ceb8d90e7fe0e2ae69c9da5b1777736919e65e3a6cef8884189786\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.777441 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b1a2e01-5780-40b6-936d-0cb8d660edee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e34edbbc93c333eeec57d816035656726c3387de09c95cb8c6a2ee272466b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a8293b93524ebe5bce244a243b2fed48b2e70ddfd4d24005c819277fe3b5355\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49e9bc4cd6deb2d389dfca3aedd65715ca9995e67459ebcbaec3af3f58f8550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea83667d1e458f89ab364a1076a88b524f94e401f8d485671a51cc9376d7b2db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60ceaf8ad9c99eef108a2f2a3c1638aab232bc13e57f557d353ffc0eccfb9761\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 08:54:05.275696 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 08:54:05.275837 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 08:54:05.277623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4218218502/tls.crt::/tmp/serving-cert-4218218502/tls.key\\\\\\\"\\\\nI1124 08:54:05.807843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 08:54:05.812116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 08:54:05.813136 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 08:54:05.813186 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 08:54:05.813193 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 08:54:05.820131 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 08:54:05.820159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820164 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 08:54:05.820168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 08:54:05.820171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 08:54:05.820173 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 08:54:05.820176 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1124 08:54:05.820498 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1124 08:54:05.826151 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0a11e25d4e518409c366b550e9cb01f4923b79a5a807822e20bcb07b1d99c68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:53:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5e143fa2c459a38eda77c3fb58a46c3fb460e0f5c09ae7048ccf4baced243f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T08:53:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T08:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:53:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.790397 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47b4d8b6bb458a1b69975b63f0035425e88f3b7dbb793784ea85a63656109629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.802878 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.815439 4719 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hkbjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"169d1eb7-ec71-4b89-95a5-980102c3e0f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T08:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c84501eb444783809b915362c3f047c5c22b0091f791626ebaacdd30c2ad9da8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T08:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6vfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T08:54:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hkbjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-24T08:55:14Z is after 2025-08-24T17:21:41Z" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.837626 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.837660 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.837670 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.837685 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.837695 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.940791 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.940851 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.940865 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.940889 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:14 crc kubenswrapper[4719]: I1124 08:55:14.940903 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:14Z","lastTransitionTime":"2025-11-24T08:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.043694 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.043859 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.043883 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.043901 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.043914 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.147102 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.147164 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.147178 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.147199 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.147210 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.250539 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.250618 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.250633 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.250658 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.250674 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.353816 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.353943 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.353955 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.353973 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.353982 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.457330 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.457397 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.457408 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.457427 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.457440 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.520226 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.520344 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.520415 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:15 crc kubenswrapper[4719]: E1124 08:55:15.520428 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:15 crc kubenswrapper[4719]: E1124 08:55:15.520518 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:15 crc kubenswrapper[4719]: E1124 08:55:15.520584 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.520593 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:15 crc kubenswrapper[4719]: E1124 08:55:15.520815 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.560893 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.560939 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.560951 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.560968 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.560981 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.663700 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.664138 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.664299 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.664419 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.664526 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.768409 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.768463 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.768473 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.768492 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.768502 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.870810 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.870853 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.870862 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.870878 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.870887 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.973483 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.973552 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.973562 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.973577 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:15 crc kubenswrapper[4719]: I1124 08:55:15.973596 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:15Z","lastTransitionTime":"2025-11-24T08:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.076838 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.076898 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.076911 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.076930 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.076947 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.179634 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.179698 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.179711 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.179733 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.179746 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.282805 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.282874 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.282891 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.282914 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.282927 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.386427 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.386478 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.386490 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.386507 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.386522 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.489659 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.489701 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.489712 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.489735 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.489747 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.592803 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.592848 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.592859 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.592875 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.592885 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.695813 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.695858 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.695868 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.695883 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.695892 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.799314 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.799373 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.799385 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.799404 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.799417 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.902111 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.902160 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.902172 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.902194 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:16 crc kubenswrapper[4719]: I1124 08:55:16.902206 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:16Z","lastTransitionTime":"2025-11-24T08:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.005737 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.005790 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.005803 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.005824 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.005835 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.109089 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.109141 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.109154 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.109175 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.109190 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.212329 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.212379 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.212388 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.212407 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.212417 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.315762 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.315810 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.315822 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.315841 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.315853 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.419789 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.419875 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.419890 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.419914 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.419933 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.520866 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.520910 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.520953 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.520910 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:55:17 crc kubenswrapper[4719]: E1124 08:55:17.521113 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:55:17 crc kubenswrapper[4719]: E1124 08:55:17.521352 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:55:17 crc kubenswrapper[4719]: E1124 08:55:17.521433 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:55:17 crc kubenswrapper[4719]: E1124 08:55:17.521712 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.524457 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.524488 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.524499 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.524517 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.524531 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.627671 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.627720 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.627730 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.627748 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.627762 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.731403 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.731448 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.731459 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.731479 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.731491 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.833826 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.834106 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.834240 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.834261 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.834273 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.936891 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.936944 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.936954 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.936972 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:17 crc kubenswrapper[4719]: I1124 08:55:17.936984 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:17Z","lastTransitionTime":"2025-11-24T08:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.039330 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.039372 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.039382 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.039398 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.039409 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.141882 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.141923 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.141933 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.141949 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.141960 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.244521 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.244772 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.244842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.244872 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.244890 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.348450 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.348505 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.348522 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.348541 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.348556 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.451468 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.451530 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.451543 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.451562 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.451585 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.553911 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.553963 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.553973 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.553991 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.554003 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.656397 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.656439 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.656448 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.656461 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.656471 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.759749 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.759829 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.759841 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.759859 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.759873 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.862813 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.862865 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.862877 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.862896 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.862909 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.966148 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.966212 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.966223 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.966246 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:18 crc kubenswrapper[4719]: I1124 08:55:18.966405 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:18Z","lastTransitionTime":"2025-11-24T08:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.068993 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.069059 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.069072 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.069089 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.069101 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.171665 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.171729 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.171743 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.171763 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.171779 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.274141 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.274200 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.274219 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.274239 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.274251 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.376775 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.376846 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.376857 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.376879 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.376891 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.478909 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.478956 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.478966 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.478980 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.478989 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.519832 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.519885 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.519964 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:55:19 crc kubenswrapper[4719]: E1124 08:55:19.520129 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.519980 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:55:19 crc kubenswrapper[4719]: E1124 08:55:19.520256 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:55:19 crc kubenswrapper[4719]: E1124 08:55:19.520370 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:55:19 crc kubenswrapper[4719]: E1124 08:55:19.520427 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.581282 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.581997 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.582097 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.582121 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.582133 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.685002 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.685068 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.685109 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.685130 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.685142 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.788932 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.788990 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.789001 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.789023 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.789098 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.892402 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.892494 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.892507 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.892531 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.892546 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.996486 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.996516 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.996527 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.996544 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:19 crc kubenswrapper[4719]: I1124 08:55:19.996556 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:19Z","lastTransitionTime":"2025-11-24T08:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.099516 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.099585 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.099599 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.099627 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.099647 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.204211 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.204284 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.204298 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.204331 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.204354 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.307111 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.307170 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.307181 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.307197 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.307207 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.410230 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.410281 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.410291 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.410306 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.410315 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.514361 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.514419 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.514432 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.514452 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.514472 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.618086 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.618134 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.618147 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.618168 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.618180 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.721075 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.721104 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.721112 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.721124 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.721133 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.823722 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.823762 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.823773 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.823790 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.823818 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.927638 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.927674 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.927684 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.927700 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:20 crc kubenswrapper[4719]: I1124 08:55:20.927711 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:20Z","lastTransitionTime":"2025-11-24T08:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.030822 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.030864 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.030876 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.030893 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.030904 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.134651 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.134721 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.134734 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.134772 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.134783 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.237465 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.237509 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.237518 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.237534 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.237544 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.340114 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.340152 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.340163 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.340178 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.340190 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.443173 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.443229 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.443251 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.443273 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.443288 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.520505 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.520642 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.520698 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.520706 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:55:21 crc kubenswrapper[4719]: E1124 08:55:21.520791 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 08:55:21 crc kubenswrapper[4719]: E1124 08:55:21.520922 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 08:55:21 crc kubenswrapper[4719]: E1124 08:55:21.521063 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757"
Nov 24 08:55:21 crc kubenswrapper[4719]: E1124 08:55:21.521357 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.545552 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.545604 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.545616 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.545634 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.545647 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.647721 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.647771 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.647782 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.647804 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.647815 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.750739 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.750790 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.750800 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.750818 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.750830 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.854651 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.854734 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.854756 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.854785 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.854808 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.957122 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.957179 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.957192 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.957205 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:21 crc kubenswrapper[4719]: I1124 08:55:21.957215 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:21Z","lastTransitionTime":"2025-11-24T08:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.059595 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.059635 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.059645 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.059662 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.059672 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.162062 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.162108 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.162119 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.162139 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.162151 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.265867 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.265917 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.265929 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.265948 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.265960 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.368987 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.369023 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.369049 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.369065 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.369077 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.472167 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.472235 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.472247 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.472266 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.472279 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.521007 4719 scope.go:117] "RemoveContainer" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9" Nov 24 08:55:22 crc kubenswrapper[4719]: E1124 08:55:22.521244 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvqzq_openshift-ovn-kubernetes(76442e88-72e2-4a86-99b4-bd07f0490aa9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.574570 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.574624 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.574633 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.574648 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.574658 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.676786 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.676827 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.676842 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.676856 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.676866 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.781254 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.781297 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.781307 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.781323 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.781334 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.884455 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.884497 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.884506 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.884522 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.884532 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.987496 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.987530 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.987540 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.987554 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:22 crc kubenswrapper[4719]: I1124 08:55:22.987563 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:22Z","lastTransitionTime":"2025-11-24T08:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.090488 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.090531 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.090543 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.090559 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.090571 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:23Z","lastTransitionTime":"2025-11-24T08:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.192870 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.193134 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.193252 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.193335 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.193401 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:23Z","lastTransitionTime":"2025-11-24T08:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.296302 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.296357 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.296370 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.296386 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.296396 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:23Z","lastTransitionTime":"2025-11-24T08:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.399397 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.399458 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.399470 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.399488 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.399501 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:23Z","lastTransitionTime":"2025-11-24T08:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.502377 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.502412 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.502421 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.502437 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.502446 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:23Z","lastTransitionTime":"2025-11-24T08:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.520794 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.520867 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.520883 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.520835 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:23 crc kubenswrapper[4719]: E1124 08:55:23.521002 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:23 crc kubenswrapper[4719]: E1124 08:55:23.521180 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:23 crc kubenswrapper[4719]: E1124 08:55:23.521256 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:23 crc kubenswrapper[4719]: E1124 08:55:23.521101 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.605154 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.605203 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.605215 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.605234 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.605246 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:23Z","lastTransitionTime":"2025-11-24T08:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.708167 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.708212 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.708223 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.708239 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.708250 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:23Z","lastTransitionTime":"2025-11-24T08:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.799797 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.799843 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.799854 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.799869 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.799877 4719 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T08:55:23Z","lastTransitionTime":"2025-11-24T08:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.847030 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq"] Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.847511 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.850230 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.850283 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.850516 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.850521 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.867484 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.867463845 podStartE2EDuration="1m12.867463845s" podCreationTimestamp="2025-11-24 08:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:23.867394673 +0000 UTC m=+100.198667935" watchObservedRunningTime="2025-11-24 08:55:23.867463845 +0000 UTC m=+100.198737097" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.882151 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.882134682 podStartE2EDuration="43.882134682s" podCreationTimestamp="2025-11-24 08:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:23.88206626 +0000 UTC m=+100.213339532" watchObservedRunningTime="2025-11-24 08:55:23.882134682 +0000 UTC m=+100.213407954" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.907151 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podStartSLOduration=78.907130373 podStartE2EDuration="1m18.907130373s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:23.905948519 +0000 UTC m=+100.237221771" watchObservedRunningTime="2025-11-24 08:55:23.907130373 +0000 UTC m=+100.238403625" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.928974 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b77d469-539c-49f5-9770-3e0239afc384-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.929018 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b77d469-539c-49f5-9770-3e0239afc384-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:23 crc 
kubenswrapper[4719]: I1124 08:55:23.929055 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b77d469-539c-49f5-9770-3e0239afc384-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.929176 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b77d469-539c-49f5-9770-3e0239afc384-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.929279 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b77d469-539c-49f5-9770-3e0239afc384-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:23 crc kubenswrapper[4719]: I1124 08:55:23.947463 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pvkgt" podStartSLOduration=77.947444129 podStartE2EDuration="1m17.947444129s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:23.947008346 +0000 UTC m=+100.278281618" watchObservedRunningTime="2025-11-24 08:55:23.947444129 +0000 UTC m=+100.278717401" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.024991 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.024969803 podStartE2EDuration="11.024969803s" podCreationTimestamp="2025-11-24 08:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:23.997923224 +0000 UTC m=+100.329196476" watchObservedRunningTime="2025-11-24 08:55:24.024969803 +0000 UTC m=+100.356243055" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.025246 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.02524134 podStartE2EDuration="1m18.02524134s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:24.024760657 +0000 UTC m=+100.356033929" watchObservedRunningTime="2025-11-24 08:55:24.02524134 +0000 UTC m=+100.356514593" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.029936 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b77d469-539c-49f5-9770-3e0239afc384-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.030280 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b77d469-539c-49f5-9770-3e0239afc384-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.030419 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b77d469-539c-49f5-9770-3e0239afc384-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.030551 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b77d469-539c-49f5-9770-3e0239afc384-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.030641 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b77d469-539c-49f5-9770-3e0239afc384-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.030664 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9b77d469-539c-49f5-9770-3e0239afc384-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.030158 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9b77d469-539c-49f5-9770-3e0239afc384-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.031702 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9b77d469-539c-49f5-9770-3e0239afc384-service-ca\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.047094 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b77d469-539c-49f5-9770-3e0239afc384-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.054082 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9b77d469-539c-49f5-9770-3e0239afc384-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-fkvsq\" (UID: \"9b77d469-539c-49f5-9770-3e0239afc384\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.084203 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hkbjt" podStartSLOduration=79.084172086 podStartE2EDuration="1m19.084172086s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:24.083712393 +0000 UTC m=+100.414985655" watchObservedRunningTime="2025-11-24 08:55:24.084172086 +0000 UTC m=+100.415445358" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.110381 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2tjfc" podStartSLOduration=78.11035168 podStartE2EDuration="1m18.11035168s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:24.109321081 +0000 UTC m=+100.440594333" watchObservedRunningTime="2025-11-24 08:55:24.11035168 +0000 UTC m=+100.441624932" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.123799 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=34.123699919 podStartE2EDuration="34.123699919s" podCreationTimestamp="2025-11-24 08:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:24.122537086 +0000 UTC m=+100.453810348" watchObservedRunningTime="2025-11-24 08:55:24.123699919 +0000 UTC m=+100.454973171" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.161592 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.218606 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-v8ghd" podStartSLOduration=79.218575757 podStartE2EDuration="1m19.218575757s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:24.217801205 +0000 UTC m=+100.549074477" watchObservedRunningTime="2025-11-24 08:55:24.218575757 +0000 UTC m=+100.549849009" Nov 24 08:55:24 crc kubenswrapper[4719]: I1124 08:55:24.219027 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9d2g8" podStartSLOduration=79.219021549 podStartE2EDuration="1m19.219021549s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:24.197412705 +0000 UTC m=+100.528685977" watchObservedRunningTime="2025-11-24 08:55:24.219021549 +0000 UTC m=+100.550294801" Nov 24 08:55:25 crc kubenswrapper[4719]: I1124 08:55:25.159464 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" event={"ID":"9b77d469-539c-49f5-9770-3e0239afc384","Type":"ContainerStarted","Data":"5674e6d3fc076fdcad310625b66028b9a17d9017b6c74c205f31fd44980cb9ef"} Nov 24 08:55:25 crc kubenswrapper[4719]: I1124 08:55:25.159523 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" event={"ID":"9b77d469-539c-49f5-9770-3e0239afc384","Type":"ContainerStarted","Data":"0bc316c3c54661f65696b923d4b4f62e56ae42d1dd98af8d94d3249ae6db8dbf"} Nov 24 08:55:25 crc kubenswrapper[4719]: I1124 08:55:25.520240 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:25 crc kubenswrapper[4719]: I1124 08:55:25.520240 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:25 crc kubenswrapper[4719]: I1124 08:55:25.520240 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:25 crc kubenswrapper[4719]: I1124 08:55:25.520325 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:25 crc kubenswrapper[4719]: E1124 08:55:25.520489 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:25 crc kubenswrapper[4719]: E1124 08:55:25.520577 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:25 crc kubenswrapper[4719]: E1124 08:55:25.520628 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:25 crc kubenswrapper[4719]: E1124 08:55:25.520679 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:25 crc kubenswrapper[4719]: I1124 08:55:25.646502 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:25 crc kubenswrapper[4719]: E1124 08:55:25.646645 4719 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:55:25 crc kubenswrapper[4719]: E1124 08:55:25.646707 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs podName:bd6beab7-bbb8-4abb-98b1-60c1f8360757 nodeName:}" failed. No retries permitted until 2025-11-24 08:56:29.646689387 +0000 UTC m=+165.977962639 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs") pod "network-metrics-daemon-5hv9d" (UID: "bd6beab7-bbb8-4abb-98b1-60c1f8360757") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 08:55:27 crc kubenswrapper[4719]: I1124 08:55:27.519726 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:27 crc kubenswrapper[4719]: I1124 08:55:27.519768 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:27 crc kubenswrapper[4719]: I1124 08:55:27.519853 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:27 crc kubenswrapper[4719]: I1124 08:55:27.519740 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:27 crc kubenswrapper[4719]: E1124 08:55:27.519865 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:27 crc kubenswrapper[4719]: E1124 08:55:27.519954 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:27 crc kubenswrapper[4719]: E1124 08:55:27.520129 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:27 crc kubenswrapper[4719]: E1124 08:55:27.520188 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:29 crc kubenswrapper[4719]: I1124 08:55:29.520229 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:29 crc kubenswrapper[4719]: I1124 08:55:29.520327 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:29 crc kubenswrapper[4719]: I1124 08:55:29.520441 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:29 crc kubenswrapper[4719]: E1124 08:55:29.520440 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:29 crc kubenswrapper[4719]: I1124 08:55:29.520537 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:29 crc kubenswrapper[4719]: E1124 08:55:29.520728 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:29 crc kubenswrapper[4719]: E1124 08:55:29.520816 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:29 crc kubenswrapper[4719]: E1124 08:55:29.520898 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:31 crc kubenswrapper[4719]: I1124 08:55:31.520144 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:31 crc kubenswrapper[4719]: I1124 08:55:31.520219 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:31 crc kubenswrapper[4719]: I1124 08:55:31.520228 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:31 crc kubenswrapper[4719]: I1124 08:55:31.520643 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:31 crc kubenswrapper[4719]: E1124 08:55:31.521086 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:31 crc kubenswrapper[4719]: E1124 08:55:31.521146 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:31 crc kubenswrapper[4719]: E1124 08:55:31.521099 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:31 crc kubenswrapper[4719]: E1124 08:55:31.521272 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:33 crc kubenswrapper[4719]: I1124 08:55:33.520683 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:33 crc kubenswrapper[4719]: I1124 08:55:33.520787 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:33 crc kubenswrapper[4719]: E1124 08:55:33.521358 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:33 crc kubenswrapper[4719]: I1124 08:55:33.520937 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:33 crc kubenswrapper[4719]: I1124 08:55:33.520816 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:33 crc kubenswrapper[4719]: E1124 08:55:33.521541 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:33 crc kubenswrapper[4719]: E1124 08:55:33.521650 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:33 crc kubenswrapper[4719]: E1124 08:55:33.521712 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:35 crc kubenswrapper[4719]: I1124 08:55:35.520597 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:35 crc kubenswrapper[4719]: E1124 08:55:35.521070 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:35 crc kubenswrapper[4719]: I1124 08:55:35.520819 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:35 crc kubenswrapper[4719]: E1124 08:55:35.521148 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:35 crc kubenswrapper[4719]: I1124 08:55:35.520840 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:35 crc kubenswrapper[4719]: E1124 08:55:35.521221 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:35 crc kubenswrapper[4719]: I1124 08:55:35.520776 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:35 crc kubenswrapper[4719]: E1124 08:55:35.521277 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:37 crc kubenswrapper[4719]: I1124 08:55:37.520296 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:37 crc kubenswrapper[4719]: I1124 08:55:37.520417 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:37 crc kubenswrapper[4719]: I1124 08:55:37.520305 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:37 crc kubenswrapper[4719]: I1124 08:55:37.520305 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:37 crc kubenswrapper[4719]: E1124 08:55:37.520465 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:37 crc kubenswrapper[4719]: E1124 08:55:37.520547 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:37 crc kubenswrapper[4719]: E1124 08:55:37.520616 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:37 crc kubenswrapper[4719]: E1124 08:55:37.520683 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:37 crc kubenswrapper[4719]: I1124 08:55:37.521402 4719 scope.go:117] "RemoveContainer" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9" Nov 24 08:55:38 crc kubenswrapper[4719]: I1124 08:55:38.204600 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/3.log" Nov 24 08:55:38 crc kubenswrapper[4719]: I1124 08:55:38.207697 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerStarted","Data":"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"} Nov 24 08:55:38 crc kubenswrapper[4719]: I1124 08:55:38.208066 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 08:55:38 crc kubenswrapper[4719]: I1124 08:55:38.243190 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-fkvsq" podStartSLOduration=93.243168662 podStartE2EDuration="1m33.243168662s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:25.174084211 +0000 UTC m=+101.505357473" watchObservedRunningTime="2025-11-24 08:55:38.243168662 +0000 UTC m=+114.574441914" Nov 24 08:55:38 crc kubenswrapper[4719]: I1124 08:55:38.243525 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podStartSLOduration=93.243517652 podStartE2EDuration="1m33.243517652s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:38.24098894 +0000 UTC m=+114.572262192" watchObservedRunningTime="2025-11-24 08:55:38.243517652 +0000 UTC m=+114.574790904" Nov 24 08:55:38 crc kubenswrapper[4719]: I1124 08:55:38.728658 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5hv9d"] Nov 24 08:55:38 crc kubenswrapper[4719]: I1124 08:55:38.728802 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:38 crc kubenswrapper[4719]: E1124 08:55:38.729053 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:39 crc kubenswrapper[4719]: I1124 08:55:39.519980 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:39 crc kubenswrapper[4719]: E1124 08:55:39.520526 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 08:55:39 crc kubenswrapper[4719]: I1124 08:55:39.520146 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:39 crc kubenswrapper[4719]: E1124 08:55:39.520618 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 08:55:39 crc kubenswrapper[4719]: I1124 08:55:39.520071 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:39 crc kubenswrapper[4719]: E1124 08:55:39.520672 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.520792 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:40 crc kubenswrapper[4719]: E1124 08:55:40.520947 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5hv9d" podUID="bd6beab7-bbb8-4abb-98b1-60c1f8360757" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.903463 4719 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.903654 4719 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.946339 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jkf8p"] Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.946950 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.951541 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q"] Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.952364 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.953467 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hcbkk"] Nov 24 08:55:40 crc kubenswrapper[4719]: W1124 08:55:40.953779 4719 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.953884 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:40 crc kubenswrapper[4719]: E1124 08:55:40.953894 4719 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:55:40 crc kubenswrapper[4719]: W1124 08:55:40.954095 4719 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Nov 24 08:55:40 crc kubenswrapper[4719]: E1124 08:55:40.954341 4719 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:55:40 crc kubenswrapper[4719]: W1124 08:55:40.954463 4719 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Nov 24 08:55:40 crc kubenswrapper[4719]: E1124 08:55:40.954543 4719 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:55:40 crc kubenswrapper[4719]: 
W1124 08:55:40.954669 4719 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Nov 24 08:55:40 crc kubenswrapper[4719]: E1124 08:55:40.954750 4719 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.956079 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fr4v7"] Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.956851 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.957500 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh"] Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.958240 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.958742 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-52tkz"] Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.959436 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.959987 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg"] Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.960525 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.964104 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.964719 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.964994 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.965354 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.968736 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-l4lt5"] Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.971298 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.973483 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.973813 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.974105 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.974614 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.974759 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.975315 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.977732 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4"] Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.978345 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.987088 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.987334 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.987675 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.987822 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.987907 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.987921 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988012 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988086 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988120 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988198 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988238 4719 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988311 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988200 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988461 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 08:55:40 crc kubenswrapper[4719]: I1124 08:55:40.988635 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.001579 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.001672 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.001947 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.002283 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.002447 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.004219 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bzb4s"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.004806 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bzb4s" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.010461 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.011051 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.011077 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.014458 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.014845 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.015320 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.015612 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.015826 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.016025 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.016954 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017114 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017145 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017177 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017120 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017278 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017289 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017335 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017373 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017427 4719 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.017979 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.018475 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.019318 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.023502 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.023766 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.038913 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.049091 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.049932 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.050350 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.050761 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.051141 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4qkwc"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.052259 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.052634 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.056439 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.065206 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.067801 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.068313 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6qn99"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.068494 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.068404 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.070318 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g48p5"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.070583 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.070885 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.068455 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.071270 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.071790 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.073597 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.076215 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mn2gk"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.076839 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4887s"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.076901 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.077824 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.079551 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j26j4"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.080123 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.081177 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.088090 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.088558 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.089179 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.089426 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.089732 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.090005 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.090304 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.090658 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.090959 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.091261 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.091496 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.092567 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.092754 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.092862 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.094753 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.095479 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.095801 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k74l4"] Nov 
24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.096070 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.096365 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.097445 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.098934 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.099898 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.100632 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101231 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101288 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101398 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101512 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101535 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101740 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101840 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.102014 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.102305 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101246 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.102588 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101407 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.101803 4719 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.100801 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.105886 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.106967 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.107361 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.107650 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.107884 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jkf8p"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.107897 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.121598 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.121771 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.121795 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.122714 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.123612 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.124064 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.127922 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.138793 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.138925 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/15b89539-dfb7-4d1b-9300-e04517c96486-available-featuregates\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.138992 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-etcd-client\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139022 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fd95d6b-226e-4eef-a232-85205a89d877-audit-dir\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139060 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprs7\" (UniqueName: \"kubernetes.io/projected/f181c2b3-1876-4446-b16e-fbbaba6f7c95-kube-api-access-pprs7\") pod \"downloads-7954f5f757-bzb4s\" (UID: \"f181c2b3-1876-4446-b16e-fbbaba6f7c95\") " pod="openshift-console/downloads-7954f5f757-bzb4s" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139082 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeaea26-9884-4565-ade3-4fdbaba94cc6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139144 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6fd95d6b-226e-4eef-a232-85205a89d877-node-pullsecrets\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139179 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-client-ca\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139225 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g7q7\" (UniqueName: \"kubernetes.io/projected/5c449dd1-4e36-4f64-8d34-ec281a84f870-kube-api-access-2g7q7\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139268 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b89539-dfb7-4d1b-9300-e04517c96486-serving-cert\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139303 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-config\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139323 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-images\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139350 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcwhp\" (UniqueName: \"kubernetes.io/projected/6fd95d6b-226e-4eef-a232-85205a89d877-kube-api-access-qcwhp\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139381 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-etcd-serving-ca\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139412 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-service-ca\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139435 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-auth-proxy-config\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139462 4719 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-serving-cert\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139488 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139511 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeaea26-9884-4565-ade3-4fdbaba94cc6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139535 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139559 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c449dd1-4e36-4f64-8d34-ec281a84f870-audit-dir\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139586 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-client-ca\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139619 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139646 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-audit\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139677 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-encryption-config\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139700 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-etcd-client\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139731 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2j6k\" (UniqueName: \"kubernetes.io/projected/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-kube-api-access-n2j6k\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139761 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-trusted-ca-bundle\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139785 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-config\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139826 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5f6z\" (UniqueName: \"kubernetes.io/projected/cdf07083-6f82-49a7-9af9-b2d7aec76240-kube-api-access-m5f6z\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139857 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139906 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tgkc\" (UniqueName: \"kubernetes.io/projected/15b89539-dfb7-4d1b-9300-e04517c96486-kube-api-access-9tgkc\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.139970 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-encryption-config\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140010 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/613468f4-6a02-4828-8873-01bccb4b2c43-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140056 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-serving-cert\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140088 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-image-import-ca\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140115 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d92wc\" (UniqueName: \"kubernetes.io/projected/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-kube-api-access-d92wc\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140144 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-config\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140175 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-config\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140207 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdf07083-6f82-49a7-9af9-b2d7aec76240-serving-cert\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140246 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/be010eff-2ece-4d07-98e1-6c7d593d89b1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x4djh\" (UID: \"be010eff-2ece-4d07-98e1-6c7d593d89b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140290 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j4n9\" (UniqueName: \"kubernetes.io/projected/be010eff-2ece-4d07-98e1-6c7d593d89b1-kube-api-access-5j4n9\") pod \"cluster-samples-operator-665b6dd947-x4djh\" (UID: \"be010eff-2ece-4d07-98e1-6c7d593d89b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140315 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxkm7\" (UniqueName: \"kubernetes.io/projected/eaeaea26-9884-4565-ade3-4fdbaba94cc6-kube-api-access-bxkm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140346 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-oauth-config\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140389 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-config\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140435 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-machine-approver-tls\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140458 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh5j2\" (UniqueName: \"kubernetes.io/projected/613468f4-6a02-4828-8873-01bccb4b2c43-kube-api-access-jh5j2\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140476 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-console-config\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140581 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140640 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-serving-cert\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140678 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-oauth-serving-cert\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140711 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-audit-policies\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140739 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-serving-cert\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.140769 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6q5z\" (UniqueName: \"kubernetes.io/projected/0437d205-eb04-4136-a158-01d8729c335c-kube-api-access-c6q5z\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.141981 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.144997 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.150057 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.157211 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.165210 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.169071 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qhkxq"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.169903 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.170702 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.169902 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.171798 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.172063 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.172202 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.175190 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-52tkz"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.175722 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.176078 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.179885 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fr4v7"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.179947 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.182483 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.190656 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ftc62"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.190763 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.192177 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.192549 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.193760 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.196759 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.196916 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hsdhb"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.198058 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.200566 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtqd7"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.206932 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.209917 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.210819 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.211304 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.213741 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.223225 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.225602 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.226504 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.234134 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mn2gk"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.234957 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bzb4s"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.238701 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245015 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j4n9\" (UniqueName: \"kubernetes.io/projected/be010eff-2ece-4d07-98e1-6c7d593d89b1-kube-api-access-5j4n9\") pod \"cluster-samples-operator-665b6dd947-x4djh\" (UID: \"be010eff-2ece-4d07-98e1-6c7d593d89b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245087 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxkm7\" (UniqueName: \"kubernetes.io/projected/eaeaea26-9884-4565-ade3-4fdbaba94cc6-kube-api-access-bxkm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245113 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-oauth-config\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245142 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-config\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245168 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-machine-approver-tls\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245188 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh5j2\" (UniqueName: \"kubernetes.io/projected/613468f4-6a02-4828-8873-01bccb4b2c43-kube-api-access-jh5j2\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245208 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-console-config\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245227 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-serving-cert\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245243 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-oauth-serving-cert\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245260 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-audit-policies\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245309 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-serving-cert\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245332 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce13f2cf-2ff9-4178-a689-14514c8b0b37-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245351 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6q5z\" (UniqueName: \"kubernetes.io/projected/0437d205-eb04-4136-a158-01d8729c335c-kube-api-access-c6q5z\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245388 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ce13f2cf-2ff9-4178-a689-14514c8b0b37-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245435 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85d596cf-88d9-4858-95a2-cfcae776651c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245454 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a0616cf-bdcc-463d-8185-fd49b74cd419-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245473 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/15b89539-dfb7-4d1b-9300-e04517c96486-available-featuregates\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245491 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-etcd-client\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245506 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fd95d6b-226e-4eef-a232-85205a89d877-audit-dir\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245525 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pprs7\" (UniqueName: \"kubernetes.io/projected/f181c2b3-1876-4446-b16e-fbbaba6f7c95-kube-api-access-pprs7\") pod \"downloads-7954f5f757-bzb4s\" (UID: \"f181c2b3-1876-4446-b16e-fbbaba6f7c95\") " pod="openshift-console/downloads-7954f5f757-bzb4s" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245542 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeaea26-9884-4565-ade3-4fdbaba94cc6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245560 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-client-ca\") pod 
\"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245577 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g7q7\" (UniqueName: \"kubernetes.io/projected/5c449dd1-4e36-4f64-8d34-ec281a84f870-kube-api-access-2g7q7\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245593 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6fd95d6b-226e-4eef-a232-85205a89d877-node-pullsecrets\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245609 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b89539-dfb7-4d1b-9300-e04517c96486-serving-cert\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245627 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a0616cf-bdcc-463d-8185-fd49b74cd419-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245643 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-config\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245661 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-images\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245680 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnd5f\" (UniqueName: \"kubernetes.io/projected/ce13f2cf-2ff9-4178-a689-14514c8b0b37-kube-api-access-dnd5f\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245712 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-etcd-serving-ca\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 
08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245732 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcwhp\" (UniqueName: \"kubernetes.io/projected/6fd95d6b-226e-4eef-a232-85205a89d877-kube-api-access-qcwhp\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245753 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-service-ca\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245795 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-serving-cert\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245814 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245833 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeaea26-9884-4565-ade3-4fdbaba94cc6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245853 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245888 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c449dd1-4e36-4f64-8d34-ec281a84f870-audit-dir\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245909 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85d596cf-88d9-4858-95a2-cfcae776651c-proxy-tls\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245928 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-auth-proxy-config\") pod 
\"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245948 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-client-ca\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245970 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.245987 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-audit\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246008 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-encryption-config\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246061 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2j6k\" (UniqueName: \"kubernetes.io/projected/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-kube-api-access-n2j6k\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246089 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-trusted-ca-bundle\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246117 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-config\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246139 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-etcd-client\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246166 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m5f6z\" (UniqueName: \"kubernetes.io/projected/cdf07083-6f82-49a7-9af9-b2d7aec76240-kube-api-access-m5f6z\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246190 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246216 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-encryption-config\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246242 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/613468f4-6a02-4828-8873-01bccb4b2c43-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246268 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tgkc\" (UniqueName: \"kubernetes.io/projected/15b89539-dfb7-4d1b-9300-e04517c96486-kube-api-access-9tgkc\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246308 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-image-import-ca\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246381 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-serving-cert\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246456 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-config\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246490 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d92wc\" (UniqueName: \"kubernetes.io/projected/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-kube-api-access-d92wc\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: 
\"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246508 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-config\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246524 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdf07083-6f82-49a7-9af9-b2d7aec76240-serving-cert\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246545 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/be010eff-2ece-4d07-98e1-6c7d593d89b1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x4djh\" (UID: \"be010eff-2ece-4d07-98e1-6c7d593d89b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246563 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a0616cf-bdcc-463d-8185-fd49b74cd419-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246580 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bql87\" (UniqueName: \"kubernetes.io/projected/85d596cf-88d9-4858-95a2-cfcae776651c-kube-api-access-bql87\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246875 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k74l4"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.246977 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6fd95d6b-226e-4eef-a232-85205a89d877-node-pullsecrets\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.251388 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-oauth-serving-cert\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.251734 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 
08:55:41.251784 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4qkwc"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.251800 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.252710 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-audit-policies\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.253006 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-config\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.265900 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/15b89539-dfb7-4d1b-9300-e04517c96486-available-featuregates\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.267110 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.267258 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-console-config\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.267257 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-config\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.267949 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-etcd-serving-ca\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.268860 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-service-ca\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.274939 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fd95d6b-226e-4eef-a232-85205a89d877-audit-dir\") pod 
\"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.276607 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l4lt5"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.276678 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hcbkk"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.277072 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-serving-cert\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.278758 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-config\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.278803 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeaea26-9884-4565-ade3-4fdbaba94cc6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.279641 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-serving-cert\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.279960 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.282368 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b89539-dfb7-4d1b-9300-e04517c96486-serving-cert\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.282725 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-etcd-client\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.283173 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-machine-approver-tls\") pod 
\"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.283504 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.289965 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-serving-cert\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.290654 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-serving-cert\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.292805 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-image-import-ca\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.292941 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-oauth-config\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.293359 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.301006 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.303434 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.303645 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c449dd1-4e36-4f64-8d34-ec281a84f870-audit-dir\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.303495 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6fd95d6b-226e-4eef-a232-85205a89d877-audit\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.304350 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-auth-proxy-config\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.305158 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-client-ca\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.306534 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.306738 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.309890 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-trusted-ca-bundle\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.312422 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-config\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.313771 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.316443 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c449dd1-4e36-4f64-8d34-ec281a84f870-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.328558 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-client-ca\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.331840 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.332127 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-z8p7k"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.329369 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/613468f4-6a02-4828-8873-01bccb4b2c43-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.333185 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fd95d6b-226e-4eef-a232-85205a89d877-encryption-config\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.333598 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.333785 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.334418 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdf07083-6f82-49a7-9af9-b2d7aec76240-serving-cert\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.338690 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/be010eff-2ece-4d07-98e1-6c7d593d89b1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x4djh\" (UID: \"be010eff-2ece-4d07-98e1-6c7d593d89b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.339124 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6qn99"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.340947 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeaea26-9884-4565-ade3-4fdbaba94cc6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.341177 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-encryption-config\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.343763 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wbvqt"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.344967 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.346911 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2v6sn"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348158 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a0616cf-bdcc-463d-8185-fd49b74cd419-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348221 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bql87\" (UniqueName: \"kubernetes.io/projected/85d596cf-88d9-4858-95a2-cfcae776651c-kube-api-access-bql87\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348331 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce13f2cf-2ff9-4178-a689-14514c8b0b37-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348369 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce13f2cf-2ff9-4178-a689-14514c8b0b37-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348413 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85d596cf-88d9-4858-95a2-cfcae776651c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348436 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a0616cf-bdcc-463d-8185-fd49b74cd419-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348446 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2v6sn" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348486 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a0616cf-bdcc-463d-8185-fd49b74cd419-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348525 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnd5f\" (UniqueName: \"kubernetes.io/projected/ce13f2cf-2ff9-4178-a689-14514c8b0b37-kube-api-access-dnd5f\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.348566 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85d596cf-88d9-4858-95a2-cfcae776651c-proxy-tls\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.349225 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce13f2cf-2ff9-4178-a689-14514c8b0b37-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.350796 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85d596cf-88d9-4858-95a2-cfcae776651c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.351677 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.351872 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c449dd1-4e36-4f64-8d34-ec281a84f870-etcd-client\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.353735 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce13f2cf-2ff9-4178-a689-14514c8b0b37-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.353945 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j26j4"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.359317 4719 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.360346 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zzpsx"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.363007 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.363084 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.363287 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.365315 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.366753 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.368563 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.371117 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ftc62"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.376436 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.382231 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.383295 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qhkxq"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.384975 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.386282 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.388460 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2v6sn"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.389580 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z8p7k"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.390082 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.390775 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.393530 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-g48p5"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.395820 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hsdhb"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.396887 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zzpsx"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.400646 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.402630 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtqd7"] Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.410434 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.431005 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.450401 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.469621 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.489968 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.510653 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.520011 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.520087 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.520737 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.530388 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.550624 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.571366 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.591332 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.610749 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.629941 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.665290 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.670629 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.690496 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.709814 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.730026 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.758768 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.770830 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.790692 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.811433 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.831093 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.850719 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.870950 4719 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.890485 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.902556 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85d596cf-88d9-4858-95a2-cfcae776651c-proxy-tls\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.910450 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.929646 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.950360 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.970403 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 08:55:41 crc kubenswrapper[4719]: I1124 08:55:41.990268 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.010819 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.029930 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.050193 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.070321 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.090724 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.109698 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.129946 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.148546 4719 request.go:700] Waited for 1.005025193s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0 Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 
08:55:42.150205 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.169945 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.182797 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a0616cf-bdcc-463d-8185-fd49b74cd419-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.190873 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.199512 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a0616cf-bdcc-463d-8185-fd49b74cd419-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.210233 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.230056 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.249871 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 08:55:42 crc kubenswrapper[4719]: E1124 08:55:42.267565 4719 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:55:42 crc kubenswrapper[4719]: E1124 08:55:42.267693 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-images podName:613468f4-6a02-4828-8873-01bccb4b2c43 nodeName:}" failed. No retries permitted until 2025-11-24 08:55:42.767667235 +0000 UTC m=+119.098940487 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-images") pod "machine-api-operator-5694c8668f-jkf8p" (UID: "613468f4-6a02-4828-8873-01bccb4b2c43") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.270433 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.291349 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 08:55:42 crc kubenswrapper[4719]: E1124 08:55:42.309083 4719 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Nov 24 08:55:42 crc kubenswrapper[4719]: E1124 08:55:42.309194 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-config podName:613468f4-6a02-4828-8873-01bccb4b2c43 nodeName:}" failed. No retries permitted until 2025-11-24 08:55:42.809172965 +0000 UTC m=+119.140446217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-config") pod "machine-api-operator-5694c8668f-jkf8p" (UID: "613468f4-6a02-4828-8873-01bccb4b2c43") : failed to sync configmap cache: timed out waiting for the condition Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.310175 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.331130 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.350612 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.391332 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.410570 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.430450 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.450685 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.470241 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.490646 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.510926 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.520211 4719 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.530515 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.549604 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.570192 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.591189 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.616301 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.630596 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.649185 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.670199 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.690174 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.710196 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.729997 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.766167 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxkm7\" (UniqueName: \"kubernetes.io/projected/eaeaea26-9884-4565-ade3-4fdbaba94cc6-kube-api-access-bxkm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-7k5s4\" (UID: \"eaeaea26-9884-4565-ade3-4fdbaba94cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.773889 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-images\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.784900 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d92wc\" (UniqueName: \"kubernetes.io/projected/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-kube-api-access-d92wc\") pod \"controller-manager-879f6c89f-hcbkk\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.804113 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.810580 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcwhp\" (UniqueName: \"kubernetes.io/projected/6fd95d6b-226e-4eef-a232-85205a89d877-kube-api-access-qcwhp\") pod \"apiserver-76f77b778f-fr4v7\" (UID: \"6fd95d6b-226e-4eef-a232-85205a89d877\") " pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.823306 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.853079 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6q5z\" (UniqueName: \"kubernetes.io/projected/0437d205-eb04-4136-a158-01d8729c335c-kube-api-access-c6q5z\") pod \"console-f9d7485db-l4lt5\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.867499 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprs7\" (UniqueName: \"kubernetes.io/projected/f181c2b3-1876-4446-b16e-fbbaba6f7c95-kube-api-access-pprs7\") pod \"downloads-7954f5f757-bzb4s\" (UID: \"f181c2b3-1876-4446-b16e-fbbaba6f7c95\") " pod="openshift-console/downloads-7954f5f757-bzb4s" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.875946 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-config\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.888844 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g7q7\" (UniqueName: \"kubernetes.io/projected/5c449dd1-4e36-4f64-8d34-ec281a84f870-kube-api-access-2g7q7\") pod \"apiserver-7bbb656c7d-452z9\" (UID: \"5c449dd1-4e36-4f64-8d34-ec281a84f870\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.909324 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2j6k\" (UniqueName: \"kubernetes.io/projected/bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc-kube-api-access-n2j6k\") pod \"machine-approver-56656f9798-v8zzg\" (UID: \"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.928824 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.931321 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tgkc\" (UniqueName: \"kubernetes.io/projected/15b89539-dfb7-4d1b-9300-e04517c96486-kube-api-access-9tgkc\") pod \"openshift-config-operator-7777fb866f-52tkz\" (UID: \"15b89539-dfb7-4d1b-9300-e04517c96486\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.960667 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.961732 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5f6z\" (UniqueName: \"kubernetes.io/projected/cdf07083-6f82-49a7-9af9-b2d7aec76240-kube-api-access-m5f6z\") pod \"route-controller-manager-6576b87f9c-d5d8q\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.971562 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j4n9\" (UniqueName: \"kubernetes.io/projected/be010eff-2ece-4d07-98e1-6c7d593d89b1-kube-api-access-5j4n9\") pod \"cluster-samples-operator-665b6dd947-x4djh\" (UID: \"be010eff-2ece-4d07-98e1-6c7d593d89b1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.975416 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 08:55:42 crc kubenswrapper[4719]: I1124 08:55:42.991752 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.001023 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bzb4s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.017400 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.025328 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.031535 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.054324 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.070985 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.090498 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.110876 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bql87\" (UniqueName: \"kubernetes.io/projected/85d596cf-88d9-4858-95a2-cfcae776651c-kube-api-access-bql87\") pod \"machine-config-controller-84d6567774-d8r69\" (UID: \"85d596cf-88d9-4858-95a2-cfcae776651c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.114705 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.114775 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fr4v7"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.134699 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.141197 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.148006 4719 request.go:700] Waited for 1.799065136s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.153580 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.174997 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.182526 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.195819 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.197136 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hcbkk"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.228498 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a0616cf-bdcc-463d-8185-fd49b74cd419-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7zwxh\" (UID: \"9a0616cf-bdcc-463d-8185-fd49b74cd419\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.228712 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnd5f\" (UniqueName: \"kubernetes.io/projected/ce13f2cf-2ff9-4178-a689-14514c8b0b37-kube-api-access-dnd5f\") pod \"openshift-apiserver-operator-796bbdcf4f-2cmg5\" (UID: \"ce13f2cf-2ff9-4178-a689-14514c8b0b37\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.232667 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.254725 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.284670 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.289850 4719 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.293839 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.298983 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" event={"ID":"6fd95d6b-226e-4eef-a232-85205a89d877","Type":"ContainerStarted","Data":"3856cf852e698fdc052808fa9b59dd4dccd9840c9688fde1acbe9fcc78c878db"} Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.301407 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-l4lt5"] Nov 24 08:55:43 crc kubenswrapper[4719]: W1124 08:55:43.309456 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc33bec0_dcdd_4cb1_a872_5ad29dc1afbc.slice/crio-2397eb6375f312746d102b0c73d8afe4a647d7008401963b26e7c2fd494e9b02 WatchSource:0}: Error finding container 2397eb6375f312746d102b0c73d8afe4a647d7008401963b26e7c2fd494e9b02: Status 404 returned error can't find the container with id 2397eb6375f312746d102b0c73d8afe4a647d7008401963b26e7c2fd494e9b02 Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.320710 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.331978 4719 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.340106 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.351072 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.371298 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.372637 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.390697 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.410299 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.417454 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-config\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.431263 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 08:55:43 crc kubenswrapper[4719]: W1124 08:55:43.431254 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaeaea26_9884_4565_ade3_4fdbaba94cc6.slice/crio-a645e67ebcea9d25afdde6b0e40f5b35c1d345531a985ea9048ee842047a462b WatchSource:0}: Error finding container a645e67ebcea9d25afdde6b0e40f5b35c1d345531a985ea9048ee842047a462b: Status 404 returned error can't find the container with id a645e67ebcea9d25afdde6b0e40f5b35c1d345531a985ea9048ee842047a462b Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.441084 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh5j2\" (UniqueName: \"kubernetes.io/projected/613468f4-6a02-4828-8873-01bccb4b2c43-kube-api-access-jh5j2\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.472824 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.475797 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/613468f4-6a02-4828-8873-01bccb4b2c43-images\") pod \"machine-api-operator-5694c8668f-jkf8p\" (UID: \"613468f4-6a02-4828-8873-01bccb4b2c43\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.487566 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.489714 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.490980 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-stats-auth\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491013 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc2ln\" (UniqueName: \"kubernetes.io/projected/8d21cb73-ee22-43f3-8824-393d3f6335b6-kube-api-access-jc2ln\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491160 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-service-ca\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491509 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f17021-823b-4f40-b34b-a94a6ab152b9-serving-cert\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491531 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d97493d9-bce3-4ee4-9e4b-5382442ad977-serving-cert\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491586 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-trusted-ca\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491609 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" (UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491662 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491682 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d97493d9-bce3-4ee4-9e4b-5382442ad977-trusted-ca\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491740 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fba87784-a987-4620-b3ce-6ac015bbd4d1-serving-cert\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491796 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-ca\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491815 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrpd\" (UniqueName: \"kubernetes.io/projected/f42a4caa-e790-4ec2-a6fd-28d97cafcf32-kube-api-access-rbrpd\") pod \"control-plane-machine-set-operator-78cbb6b69f-jpl9f\" (UID: \"f42a4caa-e790-4ec2-a6fd-28d97cafcf32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491874 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491893 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl22w\" (UniqueName: \"kubernetes.io/projected/6bf07625-221a-4cb4-9fe2-520e8f0ee115-kube-api-access-sl22w\") pod \"migrator-59844c95c7-pfhtd\" (UID: \"6bf07625-221a-4cb4-9fe2-520e8f0ee115\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491952 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-bound-sa-token\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.491974 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-default-certificate\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492300 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfqmv\" (UniqueName: \"kubernetes.io/projected/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-kube-api-access-nfqmv\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492376 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492595 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492618 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492648 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-tls\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492666 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-client\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492685 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5205f7bb-b9e5-4481-a789-63071edc127f-images\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492703 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8d21cb73-ee22-43f3-8824-393d3f6335b6-srv-cert\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492728 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/055c444e-b496-401b-b915-e8525733dd35-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492760 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492781 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5205f7bb-b9e5-4481-a789-63071edc127f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492800 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30a0bbd7-f318-46fe-a627-238dab2e710f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492829 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492852 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-trusted-ca\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492919 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055c444e-b496-401b-b915-e8525733dd35-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.492941 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-config\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.493996 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494029 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8d21cb73-ee22-43f3-8824-393d3f6335b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494094 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzmtk\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-kube-api-access-bzmtk\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494114 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj8dc\" (UniqueName: \"kubernetes.io/projected/70f17021-823b-4f40-b34b-a94a6ab152b9-kube-api-access-pj8dc\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494137 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd7sx\" (UniqueName: \"kubernetes.io/projected/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-kube-api-access-xd7sx\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" (UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494160 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2x7q\" (UniqueName: \"kubernetes.io/projected/510fbd10-427b-48c8-94ba-99f54e2227cc-kube-api-access-w2x7q\") pod \"package-server-manager-789f6589d5-mzg5s\" (UID: \"510fbd10-427b-48c8-94ba-99f54e2227cc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494180 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494202 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d321c6c1-3a71-42d1-a7e0-96dec2c02fb3-metrics-tls\") pod \"dns-operator-744455d44c-4qkwc\" (UID: \"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3\") " pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494228 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494252 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-metrics-certs\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494279 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-service-ca-bundle\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: E1124 08:55:43.494408 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:43.994385739 +0000 UTC m=+120.325659211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494674 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65m6z\" (UniqueName: \"kubernetes.io/projected/d97493d9-bce3-4ee4-9e4b-5382442ad977-kube-api-access-65m6z\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494748 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f42a4caa-e790-4ec2-a6fd-28d97cafcf32-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jpl9f\" (UID: \"f42a4caa-e790-4ec2-a6fd-28d97cafcf32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494770 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494803 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" (UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494820 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rslqc\" (UniqueName: \"kubernetes.io/projected/d321c6c1-3a71-42d1-a7e0-96dec2c02fb3-kube-api-access-rslqc\") pod \"dns-operator-744455d44c-4qkwc\" (UID: \"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3\") " pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494836 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgz6t\" (UniqueName: \"kubernetes.io/projected/5205f7bb-b9e5-4481-a789-63071edc127f-kube-api-access-zgz6t\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494885 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxwdw\" (UniqueName: \"kubernetes.io/projected/fba87784-a987-4620-b3ce-6ac015bbd4d1-kube-api-access-xxwdw\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494901 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494919 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-certificates\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494935 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055c444e-b496-401b-b915-e8525733dd35-config\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494950 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-metrics-tls\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494969 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5205f7bb-b9e5-4481-a789-63071edc127f-proxy-tls\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.494987 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a0bbd7-f318-46fe-a627-238dab2e710f-config\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.495016 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-config\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496411 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a0bbd7-f318-46fe-a627-238dab2e710f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496448 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-audit-policies\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496475 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496499 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6739d077-6441-4b90-8e23-be9b0e3cb12a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496517 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dftcj\" (UniqueName: \"kubernetes.io/projected/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-kube-api-access-dftcj\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496537 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6739d077-6441-4b90-8e23-be9b0e3cb12a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496557 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/510fbd10-427b-48c8-94ba-99f54e2227cc-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzg5s\" (UID: \"510fbd10-427b-48c8-94ba-99f54e2227cc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496573 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496592 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97493d9-bce3-4ee4-9e4b-5382442ad977-config\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496608 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ss4\" (UniqueName: \"kubernetes.io/projected/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-kube-api-access-s5ss4\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496631 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496682 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6818985a-ffd6-4447-bafe-624296df6660-audit-dir\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496701 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thxgp\" (UniqueName: \"kubernetes.io/projected/6818985a-ffd6-4447-bafe-624296df6660-kube-api-access-thxgp\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496720 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496742 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.496762 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.515925 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bzb4s"]
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.516271 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 24 08:55:43 crc kubenswrapper[4719]: W1124 08:55:43.551394 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf181c2b3_1876_4446_b16e_fbbaba6f7c95.slice/crio-3ee61c1487fb9639276b6d89bdf8d4e1c8b628ffd810d6a0e470e1f1cda722ed WatchSource:0}: Error finding container 3ee61c1487fb9639276b6d89bdf8d4e1c8b628ffd810d6a0e470e1f1cda722ed: Status 404 returned error can't find the container with id 3ee61c1487fb9639276b6d89bdf8d4e1c8b628ffd810d6a0e470e1f1cda722ed
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.597652 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.597907 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.597971 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d97493d9-bce3-4ee4-9e4b-5382442ad977-trusted-ca\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.597998 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fba87784-a987-4620-b3ce-6ac015bbd4d1-serving-cert\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598019 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-ca\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598062 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-csi-data-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598089 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrpd\" (UniqueName: \"kubernetes.io/projected/f42a4caa-e790-4ec2-a6fd-28d97cafcf32-kube-api-access-rbrpd\") pod \"control-plane-machine-set-operator-78cbb6b69f-jpl9f\" (UID: \"f42a4caa-e790-4ec2-a6fd-28d97cafcf32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598116 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598141 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl22w\" (UniqueName: \"kubernetes.io/projected/6bf07625-221a-4cb4-9fe2-520e8f0ee115-kube-api-access-sl22w\") pod \"migrator-59844c95c7-pfhtd\" (UID: \"6bf07625-221a-4cb4-9fe2-520e8f0ee115\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598166 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwdgq\" (UniqueName: \"kubernetes.io/projected/c6fd0d0f-2097-474e-a6a9-528cb296457a-kube-api-access-nwdgq\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598188 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-socket-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598211 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2dd3a2-e658-4127-a661-0590d998ea1c-config\") pod \"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.598248 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-bound-sa-token\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: E1124 08:55:43.603703 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.103657946 +0000 UTC m=+120.434931208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.603779 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-tmpfs\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.603832 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-default-certificate\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.603861 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.603889 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c2dd3a2-e658-4127-a661-0590d998ea1c-serving-cert\") pod \"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.603915 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.604011 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-526w2\" (UniqueName: \"kubernetes.io/projected/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-kube-api-access-526w2\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.604062 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfqmv\" (UniqueName: \"kubernetes.io/projected/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-kube-api-access-nfqmv\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.604108 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.604150 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.604179 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.605824 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-ca\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.612774 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.614572 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-tls\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.614693 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-client\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.614708 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.614724 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c299891d-a79d-40cb-bfda-074f6e9ea036-metrics-tls\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.617912 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5205f7bb-b9e5-4481-a789-63071edc127f-images\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.617995 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8d21cb73-ee22-43f3-8824-393d3f6335b6-srv-cert\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.618800 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5205f7bb-b9e5-4481-a789-63071edc127f-images\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.618879 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/055c444e-b496-401b-b915-e8525733dd35-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.618948 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6fd0d0f-2097-474e-a6a9-528cb296457a-srv-cert\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.619192 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.619258 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5205f7bb-b9e5-4481-a789-63071edc127f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.619304 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30a0bbd7-f318-46fe-a627-238dab2e710f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.619983 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5205f7bb-b9e5-4481-a789-63071edc127f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.621443 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fba87784-a987-4620-b3ce-6ac015bbd4d1-serving-cert\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.626193 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-default-certificate\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.626926 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.627913 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.628554 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-mountpoint-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.628685 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.628763 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-tls\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.628949 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/003a05a0-7927-454d-97e6-935ee34279f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qhkxq\" (UID: \"003a05a0-7927-454d-97e6-935ee34279f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.629584 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-trusted-ca\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.629672 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-client\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.629968 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630361 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b74b9b-50d6-454d-b527-a5980f7d762e-config-volume\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630428 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c299891d-a79d-40cb-bfda-074f6e9ea036-config-volume\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630464 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-registration-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630509 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbg5p\" (UniqueName: \"kubernetes.io/projected/7c2dd3a2-e658-4127-a661-0590d998ea1c-kube-api-access-zbg5p\") pod \"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630564 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055c444e-b496-401b-b915-e8525733dd35-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630607 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-config\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630695 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630728 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8d21cb73-ee22-43f3-8824-393d3f6335b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630771 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6fd0d0f-2097-474e-a6a9-528cb296457a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630827 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzmtk\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-kube-api-access-bzmtk\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630859 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj8dc\" (UniqueName: \"kubernetes.io/projected/70f17021-823b-4f40-b34b-a94a6ab152b9-kube-api-access-pj8dc\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630902 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd7sx\" (UniqueName: \"kubernetes.io/projected/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-kube-api-access-xd7sx\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" (UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630934 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdtz\" (UniqueName: \"kubernetes.io/projected/ee3d18bd-4007-4fac-952d-528cb25a90dd-kube-api-access-jkdtz\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630963 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b74b9b-50d6-454d-b527-a5980f7d762e-secret-volume\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.630997 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2x7q\" (UniqueName: \"kubernetes.io/projected/510fbd10-427b-48c8-94ba-99f54e2227cc-kube-api-access-w2x7q\") pod \"package-server-manager-789f6589d5-mzg5s\" (UID: \"510fbd10-427b-48c8-94ba-99f54e2227cc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631192 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631256 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d321c6c1-3a71-42d1-a7e0-96dec2c02fb3-metrics-tls\") pod \"dns-operator-744455d44c-4qkwc\" (UID: \"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3\") " pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631329 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631360 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/541155af-19a7-4438-9dc6-700d5ba1e889-cert\") pod \"ingress-canary-2v6sn\" (UID: \"541155af-19a7-4438-9dc6-700d5ba1e889\") " pod="openshift-ingress-canary/ingress-canary-2v6sn"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631406 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-metrics-certs\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631451 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-webhook-cert\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631507 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-service-ca-bundle\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631544 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zr8j\" (UniqueName: \"kubernetes.io/projected/732e3b35-79a1-47d8-bc13-44ddffb8de36-kube-api-access-8zr8j\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631605 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65m6z\" (UniqueName: \"kubernetes.io/projected/d97493d9-bce3-4ee4-9e4b-5382442ad977-kube-api-access-65m6z\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631613 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-trusted-ca\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631645 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f42a4caa-e790-4ec2-a6fd-28d97cafcf32-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jpl9f\" (UID: \"f42a4caa-e790-4ec2-a6fd-28d97cafcf32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631701 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631736 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhxmn\" (UniqueName: \"kubernetes.io/projected/541155af-19a7-4438-9dc6-700d5ba1e889-kube-api-access-dhxmn\") pod \"ingress-canary-2v6sn\" (UID: \"541155af-19a7-4438-9dc6-700d5ba1e889\") " pod="openshift-ingress-canary/ingress-canary-2v6sn"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631774 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0abda2dc-f505-4af1-be2e-fb5b3765bb23-certs\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631806 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-apiservice-cert\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631866 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" (UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631900 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rslqc\" (UniqueName: \"kubernetes.io/projected/d321c6c1-3a71-42d1-a7e0-96dec2c02fb3-kube-api-access-rslqc\") pod \"dns-operator-744455d44c-4qkwc\" (UID: \"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3\") " pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631936 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgz6t\" (UniqueName: \"kubernetes.io/projected/5205f7bb-b9e5-4481-a789-63071edc127f-kube-api-access-zgz6t\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.631972 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0abda2dc-f505-4af1-be2e-fb5b3765bb23-node-bootstrap-token\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.632019 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxwdw\" (UniqueName: \"kubernetes.io/projected/fba87784-a987-4620-b3ce-6ac015bbd4d1-kube-api-access-xxwdw\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.634661 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d97493d9-bce3-4ee4-9e4b-5382442ad977-trusted-ca\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.634917 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: E1124 08:55:43.638785 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.138722262 +0000 UTC m=+120.469995514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.641777 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" (UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.642099 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.652647 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-service-ca-bundle\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.658073 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.662591 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8d21cb73-ee22-43f3-8824-393d3f6335b6-srv-cert\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.662879 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-config\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4"
Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.666552 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-bound-sa-token\") pod \"image-registry-697d97f7c8-j26j4\" (UID:
\"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.670592 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-52tkz"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.681281 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.681407 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-certificates\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.681482 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055c444e-b496-401b-b915-e8525733dd35-config\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.681517 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-metrics-tls\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.682362 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-metrics-certs\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.682545 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5205f7bb-b9e5-4481-a789-63071edc127f-proxy-tls\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.682586 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a0bbd7-f318-46fe-a627-238dab2e710f-config\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.682971 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-config\") pod \"authentication-operator-69f744f599-6qn99\" (UID: 
\"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.683015 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlqjl\" (UniqueName: \"kubernetes.io/projected/c0b74b9b-50d6-454d-b527-a5980f7d762e-kube-api-access-hlqjl\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.684118 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.686204 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-certificates\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.687613 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.688399 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.689355 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/055c444e-b496-401b-b915-e8525733dd35-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.690102 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f42a4caa-e790-4ec2-a6fd-28d97cafcf32-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jpl9f\" (UID: \"f42a4caa-e790-4ec2-a6fd-28d97cafcf32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.691370 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f17021-823b-4f40-b34b-a94a6ab152b9-config\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.692850 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.692923 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8d21cb73-ee22-43f3-8824-393d3f6335b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.693768 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d321c6c1-3a71-42d1-a7e0-96dec2c02fb3-metrics-tls\") pod \"dns-operator-744455d44c-4qkwc\" (UID: \"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3\") " pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.693910 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl22w\" (UniqueName: \"kubernetes.io/projected/6bf07625-221a-4cb4-9fe2-520e8f0ee115-kube-api-access-sl22w\") pod \"migrator-59844c95c7-pfhtd\" (UID: \"6bf07625-221a-4cb4-9fe2-520e8f0ee115\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.695027 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5205f7bb-b9e5-4481-a789-63071edc127f-proxy-tls\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.696710 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a0bbd7-f318-46fe-a627-238dab2e710f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.697642 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30a0bbd7-f318-46fe-a627-238dab2e710f-config\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.697823 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-audit-policies\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.697884 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.697924 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6739d077-6441-4b90-8e23-be9b0e3cb12a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.697954 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dftcj\" (UniqueName: \"kubernetes.io/projected/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-kube-api-access-dftcj\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.698418 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.699447 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6739d077-6441-4b90-8e23-be9b0e3cb12a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.700376 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a0bbd7-f318-46fe-a627-238dab2e710f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.700645 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/510fbd10-427b-48c8-94ba-99f54e2227cc-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzg5s\" (UID: \"510fbd10-427b-48c8-94ba-99f54e2227cc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.700743 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.700791 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee3d18bd-4007-4fac-952d-528cb25a90dd-signing-key\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.700942 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6739d077-6441-4b90-8e23-be9b0e3cb12a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.700958 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stdc\" (UniqueName: \"kubernetes.io/projected/0abda2dc-f505-4af1-be2e-fb5b3765bb23-kube-api-access-7stdc\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.701007 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnmh\" (UniqueName: \"kubernetes.io/projected/c299891d-a79d-40cb-bfda-074f6e9ea036-kube-api-access-crnmh\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.701182 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97493d9-bce3-4ee4-9e4b-5382442ad977-config\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.701224 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5ss4\" (UniqueName: \"kubernetes.io/projected/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-kube-api-access-s5ss4\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.701274 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdwpc\" (UniqueName: \"kubernetes.io/projected/003a05a0-7927-454d-97e6-935ee34279f5-kube-api-access-bdwpc\") pod \"multus-admission-controller-857f4d67dd-qhkxq\" (UID: \"003a05a0-7927-454d-97e6-935ee34279f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.702136 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97493d9-bce3-4ee4-9e4b-5382442ad977-config\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.702529 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.708786 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-audit-policies\") pod \"oauth-openshift-558db77b4-g48p5\" 
(UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.709520 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4qzh\" (UniqueName: \"kubernetes.io/projected/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-kube-api-access-v4qzh\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.710774 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6818985a-ffd6-4447-bafe-624296df6660-audit-dir\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.710868 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thxgp\" (UniqueName: \"kubernetes.io/projected/6818985a-ffd6-4447-bafe-624296df6660-kube-api-access-thxgp\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.710898 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.710977 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-plugins-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711017 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711144 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711191 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-stats-auth\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 08:55:43 crc 
kubenswrapper[4719]: I1124 08:55:43.711257 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrpd\" (UniqueName: \"kubernetes.io/projected/f42a4caa-e790-4ec2-a6fd-28d97cafcf32-kube-api-access-rbrpd\") pod \"control-plane-machine-set-operator-78cbb6b69f-jpl9f\" (UID: \"f42a4caa-e790-4ec2-a6fd-28d97cafcf32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711309 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc2ln\" (UniqueName: \"kubernetes.io/projected/8d21cb73-ee22-43f3-8824-393d3f6335b6-kube-api-access-jc2ln\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711339 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-service-ca\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711342 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/510fbd10-427b-48c8-94ba-99f54e2227cc-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mzg5s\" (UID: \"510fbd10-427b-48c8-94ba-99f54e2227cc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711360 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee3d18bd-4007-4fac-952d-528cb25a90dd-signing-cabundle\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711480 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711493 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f17021-823b-4f40-b34b-a94a6ab152b9-serving-cert\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.711529 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d97493d9-bce3-4ee4-9e4b-5382442ad977-serving-cert\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.712514 4719 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/055c444e-b496-401b-b915-e8525733dd35-config\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.713420 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-trusted-ca\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.713497 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" (UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.713544 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.715537 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6818985a-ffd6-4447-bafe-624296df6660-audit-dir\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.717516 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.718387 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-trusted-ca\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.719999 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fba87784-a987-4620-b3ce-6ac015bbd4d1-etcd-service-ca\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.721280 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" 
(UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.729535 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f17021-823b-4f40-b34b-a94a6ab152b9-serving-cert\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.730285 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6739d077-6441-4b90-8e23-be9b0e3cb12a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.740618 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.751698 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d97493d9-bce3-4ee4-9e4b-5382442ad977-serving-cert\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.755523 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/055c444e-b496-401b-b915-e8525733dd35-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wtgd6\" (UID: \"055c444e-b496-401b-b915-e8525733dd35\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.757804 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfqmv\" (UniqueName: \"kubernetes.io/projected/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-kube-api-access-nfqmv\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.767860 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-stats-auth\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.767977 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30a0bbd7-f318-46fe-a627-238dab2e710f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8fxzr\" (UID: \"30a0bbd7-f318-46fe-a627-238dab2e710f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.773559 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.775300 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-metrics-tls\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.779243 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd7sx\" (UniqueName: \"kubernetes.io/projected/5ca69dfc-1cff-4287-81e4-d6aa55d77dcd-kube-api-access-xd7sx\") pod \"kube-storage-version-migrator-operator-b67b599dd-z58r9\" (UID: \"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" Nov 24 08:55:43 crc kubenswrapper[4719]: W1124 08:55:43.794592 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdf07083_6f82_49a7_9af9_b2d7aec76240.slice/crio-d6166b6f4774f422161aa803a066535d1aa583a1e480f05d5d6f76cc09099e1a WatchSource:0}: Error finding container d6166b6f4774f422161aa803a066535d1aa583a1e480f05d5d6f76cc09099e1a: Status 404 returned error can't find the container with id d6166b6f4774f422161aa803a066535d1aa583a1e480f05d5d6f76cc09099e1a Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.801247 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.802561 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rslqc\" (UniqueName: \"kubernetes.io/projected/d321c6c1-3a71-42d1-a7e0-96dec2c02fb3-kube-api-access-rslqc\") pod \"dns-operator-744455d44c-4qkwc\" (UID: \"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3\") " pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.814845 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:43 crc kubenswrapper[4719]: E1124 08:55:43.815085 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.314990724 +0000 UTC m=+120.646263986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815149 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4qzh\" (UniqueName: \"kubernetes.io/projected/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-kube-api-access-v4qzh\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815196 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-plugins-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815232 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee3d18bd-4007-4fac-952d-528cb25a90dd-signing-cabundle\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815263 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-csi-data-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815289 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwdgq\" (UniqueName: \"kubernetes.io/projected/c6fd0d0f-2097-474e-a6a9-528cb296457a-kube-api-access-nwdgq\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815310 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-socket-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815329 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2dd3a2-e658-4127-a661-0590d998ea1c-config\") pod \"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815359 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-tmpfs\") pod 
\"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815381 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815735 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c2dd3a2-e658-4127-a661-0590d998ea1c-serving-cert\") pod \"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815765 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815784 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-526w2\" (UniqueName: \"kubernetes.io/projected/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-kube-api-access-526w2\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815819 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c299891d-a79d-40cb-bfda-074f6e9ea036-metrics-tls\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815849 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6fd0d0f-2097-474e-a6a9-528cb296457a-srv-cert\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815876 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-mountpoint-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815903 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/003a05a0-7927-454d-97e6-935ee34279f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qhkxq\" (UID: \"003a05a0-7927-454d-97e6-935ee34279f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815930 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b74b9b-50d6-454d-b527-a5980f7d762e-config-volume\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815945 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c299891d-a79d-40cb-bfda-074f6e9ea036-config-volume\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815966 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-registration-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815985 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbg5p\" (UniqueName: \"kubernetes.io/projected/7c2dd3a2-e658-4127-a661-0590d998ea1c-kube-api-access-zbg5p\") pod \"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816014 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816065 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6fd0d0f-2097-474e-a6a9-528cb296457a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816104 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdtz\" (UniqueName: \"kubernetes.io/projected/ee3d18bd-4007-4fac-952d-528cb25a90dd-kube-api-access-jkdtz\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816120 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b74b9b-50d6-454d-b527-a5980f7d762e-secret-volume\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816143 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/541155af-19a7-4438-9dc6-700d5ba1e889-cert\") pod \"ingress-canary-2v6sn\" (UID: 
\"541155af-19a7-4438-9dc6-700d5ba1e889\") " pod="openshift-ingress-canary/ingress-canary-2v6sn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816164 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-webhook-cert\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816181 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zr8j\" (UniqueName: \"kubernetes.io/projected/732e3b35-79a1-47d8-bc13-44ddffb8de36-kube-api-access-8zr8j\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816234 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhxmn\" (UniqueName: \"kubernetes.io/projected/541155af-19a7-4438-9dc6-700d5ba1e889-kube-api-access-dhxmn\") pod \"ingress-canary-2v6sn\" (UID: \"541155af-19a7-4438-9dc6-700d5ba1e889\") " pod="openshift-ingress-canary/ingress-canary-2v6sn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816250 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0abda2dc-f505-4af1-be2e-fb5b3765bb23-certs\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816275 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-apiservice-cert\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816307 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0abda2dc-f505-4af1-be2e-fb5b3765bb23-node-bootstrap-token\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816338 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlqjl\" (UniqueName: \"kubernetes.io/projected/c0b74b9b-50d6-454d-b527-a5980f7d762e-kube-api-access-hlqjl\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816380 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee3d18bd-4007-4fac-952d-528cb25a90dd-signing-key\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816405 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7stdc\" (UniqueName: 
\"kubernetes.io/projected/0abda2dc-f505-4af1-be2e-fb5b3765bb23-kube-api-access-7stdc\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816420 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crnmh\" (UniqueName: \"kubernetes.io/projected/c299891d-a79d-40cb-bfda-074f6e9ea036-kube-api-access-crnmh\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816444 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdwpc\" (UniqueName: \"kubernetes.io/projected/003a05a0-7927-454d-97e6-935ee34279f5-kube-api-access-bdwpc\") pod \"multus-admission-controller-857f4d67dd-qhkxq\" (UID: \"003a05a0-7927-454d-97e6-935ee34279f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.816608 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" Nov 24 08:55:43 crc kubenswrapper[4719]: E1124 08:55:43.817843 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.317824994 +0000 UTC m=+120.649098246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.823533 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-csi-data-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.824397 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.826856 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-mountpoint-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.827162 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c2dd3a2-e658-4127-a661-0590d998ea1c-config\") pod 
\"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.827390 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-socket-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.815935 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-plugins-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.827989 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-tmpfs\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.828204 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c299891d-a79d-40cb-bfda-074f6e9ea036-config-volume\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.829574 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/003a05a0-7927-454d-97e6-935ee34279f5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-qhkxq\" (UID: \"003a05a0-7927-454d-97e6-935ee34279f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.829665 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/732e3b35-79a1-47d8-bc13-44ddffb8de36-registration-dir\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.834351 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgz6t\" (UniqueName: \"kubernetes.io/projected/5205f7bb-b9e5-4481-a789-63071edc127f-kube-api-access-zgz6t\") pod \"machine-config-operator-74547568cd-n6v6j\" (UID: \"5205f7bb-b9e5-4481-a789-63071edc127f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.834422 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b74b9b-50d6-454d-b527-a5980f7d762e-config-volume\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.834741 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxwdw\" (UniqueName: 
\"kubernetes.io/projected/fba87784-a987-4620-b3ce-6ac015bbd4d1-kube-api-access-xxwdw\") pod \"etcd-operator-b45778765-k74l4\" (UID: \"fba87784-a987-4620-b3ce-6ac015bbd4d1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.835749 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c299891d-a79d-40cb-bfda-074f6e9ea036-metrics-tls\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.841401 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.842437 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6fd0d0f-2097-474e-a6a9-528cb296457a-srv-cert\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.843895 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.845498 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.850319 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b74b9b-50d6-454d-b527-a5980f7d762e-secret-volume\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.855566 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0abda2dc-f505-4af1-be2e-fb5b3765bb23-certs\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.855650 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/541155af-19a7-4438-9dc6-700d5ba1e889-cert\") pod \"ingress-canary-2v6sn\" (UID: \"541155af-19a7-4438-9dc6-700d5ba1e889\") " pod="openshift-ingress-canary/ingress-canary-2v6sn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.856172 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-apiservice-cert\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.856570 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c2dd3a2-e658-4127-a661-0590d998ea1c-serving-cert\") pod \"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.856451 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-webhook-cert\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.857247 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6fd0d0f-2097-474e-a6a9-528cb296457a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.857581 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee3d18bd-4007-4fac-952d-528cb25a90dd-signing-cabundle\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.858994 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.859852 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0abda2dc-f505-4af1-be2e-fb5b3765bb23-node-bootstrap-token\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.861018 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee3d18bd-4007-4fac-952d-528cb25a90dd-signing-key\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.861096 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2x7q\" (UniqueName: \"kubernetes.io/projected/510fbd10-427b-48c8-94ba-99f54e2227cc-kube-api-access-w2x7q\") pod \"package-server-manager-789f6589d5-mzg5s\" (UID: \"510fbd10-427b-48c8-94ba-99f54e2227cc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.866503 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.870481 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzmtk\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-kube-api-access-bzmtk\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.878502 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.886300 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69"] Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.893585 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.894345 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj8dc\" (UniqueName: \"kubernetes.io/projected/70f17021-823b-4f40-b34b-a94a6ab152b9-kube-api-access-pj8dc\") pod \"authentication-operator-69f744f599-6qn99\" (UID: \"70f17021-823b-4f40-b34b-a94a6ab152b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.914395 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65m6z\" (UniqueName: \"kubernetes.io/projected/d97493d9-bce3-4ee4-9e4b-5382442ad977-kube-api-access-65m6z\") pod \"console-operator-58897d9998-mn2gk\" (UID: \"d97493d9-bce3-4ee4-9e4b-5382442ad977\") " pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.924795 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:43 crc kubenswrapper[4719]: E1124 08:55:43.925852 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.425799564 +0000 UTC m=+120.757072816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.935446 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dftcj\" (UniqueName: \"kubernetes.io/projected/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-kube-api-access-dftcj\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.952180 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2f8e3ee1-3f11-4e36-8063-d96db6c59a40-bound-sa-token\") pod \"ingress-operator-5b745b69d9-m6fbn\" (UID: \"2f8e3ee1-3f11-4e36-8063-d96db6c59a40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.987077 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5ss4\" (UniqueName: \"kubernetes.io/projected/8389675e-5e4d-40d2-a5c8-b3e3587bf67e-kube-api-access-s5ss4\") pod \"router-default-5444994796-4887s\" (UID: \"8389675e-5e4d-40d2-a5c8-b3e3587bf67e\") " pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 08:55:43 crc kubenswrapper[4719]: I1124 08:55:43.991937 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh"] Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.017403 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thxgp\" (UniqueName: \"kubernetes.io/projected/6818985a-ffd6-4447-bafe-624296df6660-kube-api-access-thxgp\") pod \"oauth-openshift-558db77b4-g48p5\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.027794 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.028370 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.528352589 +0000 UTC m=+120.859625841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.028763 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.033655 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vzgbp\" (UID: \"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp" Nov 24 08:55:44 crc kubenswrapper[4719]: W1124 08:55:44.035744 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0616cf_bdcc_463d_8185_fd49b74cd419.slice/crio-3177f5a8b02d7aef93798d39fa03f11bb88960132e0a5a864006033bd6d8db98 WatchSource:0}: Error finding container 3177f5a8b02d7aef93798d39fa03f11bb88960132e0a5a864006033bd6d8db98: Status 404 returned error can't find the container with id 3177f5a8b02d7aef93798d39fa03f11bb88960132e0a5a864006033bd6d8db98 Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.042120 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.058574 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.061593 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc2ln\" (UniqueName: \"kubernetes.io/projected/8d21cb73-ee22-43f3-8824-393d3f6335b6-kube-api-access-jc2ln\") pod \"catalog-operator-68c6474976-xzgz5\" (UID: \"8d21cb73-ee22-43f3-8824-393d3f6335b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.072499 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.080698 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4qzh\" (UniqueName: \"kubernetes.io/projected/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-kube-api-access-v4qzh\") pod \"marketplace-operator-79b997595-gtqd7\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.081977 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.104630 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwdgq\" (UniqueName: \"kubernetes.io/projected/c6fd0d0f-2097-474e-a6a9-528cb296457a-kube-api-access-nwdgq\") pod \"olm-operator-6b444d44fb-gjq99\" (UID: \"c6fd0d0f-2097-474e-a6a9-528cb296457a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.108632 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.109861 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbg5p\" (UniqueName: \"kubernetes.io/projected/7c2dd3a2-e658-4127-a661-0590d998ea1c-kube-api-access-zbg5p\") pod \"service-ca-operator-777779d784-ftc62\" (UID: \"7c2dd3a2-e658-4127-a661-0590d998ea1c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.110461 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jkf8p"] Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.128085 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.128529 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.128657 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.628640119 +0000 UTC m=+120.959913371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.128929 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.129456 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 08:55:44.629445112 +0000 UTC m=+120.960718364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.134949 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7stdc\" (UniqueName: \"kubernetes.io/projected/0abda2dc-f505-4af1-be2e-fb5b3765bb23-kube-api-access-7stdc\") pod \"machine-config-server-wbvqt\" (UID: \"0abda2dc-f505-4af1-be2e-fb5b3765bb23\") " pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.155564 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crnmh\" (UniqueName: \"kubernetes.io/projected/c299891d-a79d-40cb-bfda-074f6e9ea036-kube-api-access-crnmh\") pod \"dns-default-z8p7k\" (UID: \"c299891d-a79d-40cb-bfda-074f6e9ea036\") " pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.190850 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdwpc\" (UniqueName: \"kubernetes.io/projected/003a05a0-7927-454d-97e6-935ee34279f5-kube-api-access-bdwpc\") pod \"multus-admission-controller-857f4d67dd-qhkxq\" (UID: \"003a05a0-7927-454d-97e6-935ee34279f5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.199961 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdtz\" (UniqueName: \"kubernetes.io/projected/ee3d18bd-4007-4fac-952d-528cb25a90dd-kube-api-access-jkdtz\") pod \"service-ca-9c57cc56f-hsdhb\" (UID: \"ee3d18bd-4007-4fac-952d-528cb25a90dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.209306 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:44 crc kubenswrapper[4719]: W1124 08:55:44.213487 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod613468f4_6a02_4828_8873_01bccb4b2c43.slice/crio-252c5c409a97652caa0dec2e919fd67630f8e2f4af84e014a13f3d5bcb8b9738 WatchSource:0}: Error finding container 252c5c409a97652caa0dec2e919fd67630f8e2f4af84e014a13f3d5bcb8b9738: Status 404 returned error can't find the container with id 252c5c409a97652caa0dec2e919fd67630f8e2f4af84e014a13f3d5bcb8b9738 Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.213656 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.217846 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-526w2\" (UniqueName: \"kubernetes.io/projected/dd37c67b-4f85-4ae8-b9ad-27d63aadca79-kube-api-access-526w2\") pod \"packageserver-d55dfcdfc-ctxrd\" (UID: \"dd37c67b-4f85-4ae8-b9ad-27d63aadca79\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.227883 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.229677 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.233308 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.233834 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.733808899 +0000 UTC m=+121.065082151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.242399 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zr8j\" (UniqueName: \"kubernetes.io/projected/732e3b35-79a1-47d8-bc13-44ddffb8de36-kube-api-access-8zr8j\") pod \"csi-hostpathplugin-zzpsx\" (UID: \"732e3b35-79a1-47d8-bc13-44ddffb8de36\") " pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.242998 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.247395 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.255659 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhxmn\" (UniqueName: \"kubernetes.io/projected/541155af-19a7-4438-9dc6-700d5ba1e889-kube-api-access-dhxmn\") pod \"ingress-canary-2v6sn\" (UID: \"541155af-19a7-4438-9dc6-700d5ba1e889\") " pod="openshift-ingress-canary/ingress-canary-2v6sn" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.256062 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.256500 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.277490 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.283653 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wbvqt" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.300718 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2v6sn" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.325618 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.334913 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.335394 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.835377587 +0000 UTC m=+121.166650839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.348455 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlqjl\" (UniqueName: \"kubernetes.io/projected/c0b74b9b-50d6-454d-b527-a5980f7d762e-kube-api-access-hlqjl\") pod \"collect-profiles-29399565-zmc2q\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.384249 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" event={"ID":"ce13f2cf-2ff9-4178-a689-14514c8b0b37","Type":"ContainerStarted","Data":"b4e55ba25aa475943fccc185ac268c975fecf09f25bd3f6b86aa6bb2e12827ea"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.388248 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" event={"ID":"85d596cf-88d9-4858-95a2-cfcae776651c","Type":"ContainerStarted","Data":"84ebc2607623855590abb0bff8ccfbfce18d76af3851f686bf90ed31130480d0"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.428073 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr"] Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.436298 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.436766 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:44.936743909 +0000 UTC m=+121.268017161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.451557 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" event={"ID":"9a0616cf-bdcc-463d-8185-fd49b74cd419","Type":"ContainerStarted","Data":"3177f5a8b02d7aef93798d39fa03f11bb88960132e0a5a864006033bd6d8db98"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.495490 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" event={"ID":"5c449dd1-4e36-4f64-8d34-ec281a84f870","Type":"ContainerStarted","Data":"12c0b0859d2dda802a2d1e5212b2651de32adf8bbb74e4e6f25a564e769df887"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.553880 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.555785 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.055761762 +0000 UTC m=+121.387035014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.600425 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.606205 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" event={"ID":"be010eff-2ece-4d07-98e1-6c7d593d89b1","Type":"ContainerStarted","Data":"2db738c49c31691f4c9c76c04b64f37548e01bbed6d799c87f1908c760696108"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.611160 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.645530 4719 generic.go:334] "Generic (PLEG): container finished" podID="6fd95d6b-226e-4eef-a232-85205a89d877" containerID="3662b446a03ed0730580fced4035c2f416a482845299720c4abe0e12dc451de3" exitCode=0 Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.683006 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.683389 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.18335978 +0000 UTC m=+121.514633032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.688393 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.688890 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.188874647 +0000 UTC m=+121.520147899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.739180 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" event={"ID":"6fd95d6b-226e-4eef-a232-85205a89d877","Type":"ContainerDied","Data":"3662b446a03ed0730580fced4035c2f416a482845299720c4abe0e12dc451de3"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.739633 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" event={"ID":"613468f4-6a02-4828-8873-01bccb4b2c43","Type":"ContainerStarted","Data":"252c5c409a97652caa0dec2e919fd67630f8e2f4af84e014a13f3d5bcb8b9738"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.781489 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" event={"ID":"15b89539-dfb7-4d1b-9300-e04517c96486","Type":"ContainerStarted","Data":"29eedea98bda46126392518086a965b59d6bb0266283b09566680a72d696e8ba"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.781553 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" event={"ID":"15b89539-dfb7-4d1b-9300-e04517c96486","Type":"ContainerStarted","Data":"4321b38af3b41dde703707ea7122901e1f69bc761877f0ca0a701c6663d9287e"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.793681 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" event={"ID":"5ed2027e-eab9-48cc-a501-e6ff6ce80e92","Type":"ContainerStarted","Data":"1f460ecf9d691459441a17492aba54fd31c186df64101d40043e3cbcd01684e8"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.793767 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" event={"ID":"5ed2027e-eab9-48cc-a501-e6ff6ce80e92","Type":"ContainerStarted","Data":"3e59dc7e3e8d9a0e81f83f454f91c07a3a4091a4e115cb98c0341c1aef16ea5b"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.793699 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.793835 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.29380789 +0000 UTC m=+121.625081142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.794812 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.795137 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.797473 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.297278748 +0000 UTC m=+121.628551990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.799459 4719 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hcbkk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.800506 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" podUID="5ed2027e-eab9-48cc-a501-e6ff6ce80e92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.826591 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6"] Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.833214 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" event={"ID":"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc","Type":"ContainerStarted","Data":"c89df1b2a31457a77c7ffa9af91e002df5720cb4dacb9a4a6178f3bc178d8a1c"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.833280 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" 
event={"ID":"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc","Type":"ContainerStarted","Data":"2397eb6375f312746d102b0c73d8afe4a647d7008401963b26e7c2fd494e9b02"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.862239 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l4lt5" event={"ID":"0437d205-eb04-4136-a158-01d8729c335c","Type":"ContainerStarted","Data":"a62b38dfa0bf45409b3e765d1b60c7c290d1a3dcb239f90ff71126c9c92dcb74"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.862338 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l4lt5" event={"ID":"0437d205-eb04-4136-a158-01d8729c335c","Type":"ContainerStarted","Data":"93ddeda68f6e0ebcf7acb04a42be3bef2deab6d8683cd2b740346e11ccb18960"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.868876 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9"] Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.874801 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bzb4s" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.874852 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bzb4s" event={"ID":"f181c2b3-1876-4446-b16e-fbbaba6f7c95","Type":"ContainerStarted","Data":"07651ba2ea1f0cc9d78f216f1536d255d99f1fb6da33f4d6184beee3b9a249d5"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.874883 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bzb4s" event={"ID":"f181c2b3-1876-4446-b16e-fbbaba6f7c95","Type":"ContainerStarted","Data":"3ee61c1487fb9639276b6d89bdf8d4e1c8b628ffd810d6a0e470e1f1cda722ed"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.874913 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.874987 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.897323 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd"] Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.898076 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.900898 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.400827202 +0000 UTC m=+121.732100594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.909508 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" event={"ID":"cdf07083-6f82-49a7-9af9-b2d7aec76240","Type":"ContainerStarted","Data":"d6166b6f4774f422161aa803a066535d1aa583a1e480f05d5d6f76cc09099e1a"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.915066 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.917392 4719 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-d5d8q container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.919658 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" podUID="cdf07083-6f82-49a7-9af9-b2d7aec76240" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.925570 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 08:55:44 crc kubenswrapper[4719]: E1124 08:55:44.931046 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.4310115 +0000 UTC m=+121.762284762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.962092 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" event={"ID":"eaeaea26-9884-4565-ade3-4fdbaba94cc6","Type":"ContainerStarted","Data":"d09bffaa48bb3bbc6fcc683f62754f3dbd60a99a01f74e7457456d1d0945edb1"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.962249 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" event={"ID":"eaeaea26-9884-4565-ade3-4fdbaba94cc6","Type":"ContainerStarted","Data":"a645e67ebcea9d25afdde6b0e40f5b35c1d345531a985ea9048ee842047a462b"} Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.965272 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f"] Nov 24 08:55:44 crc kubenswrapper[4719]: I1124 08:55:44.984179 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j"] Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.026797 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bzb4s" podStartSLOduration=99.026773103 podStartE2EDuration="1m39.026773103s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:44.976307248 +0000 UTC m=+121.307580530" watchObservedRunningTime="2025-11-24 08:55:45.026773103 +0000 UTC m=+121.358046355" Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.031664 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:45 crc kubenswrapper[4719]: W1124 08:55:45.033249 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod055c444e_b496_401b_b915_e8525733dd35.slice/crio-5a72a4b2c607e56c26a253502b3634bec50e8783ef737c5158153c3bf58c4ef3 WatchSource:0}: Error finding container 5a72a4b2c607e56c26a253502b3634bec50e8783ef737c5158153c3bf58c4ef3: Status 404 returned error can't find the container with id 5a72a4b2c607e56c26a253502b3634bec50e8783ef737c5158153c3bf58c4ef3 Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.034257 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.534224044 +0000 UTC m=+121.865497306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.070023 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s"] Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.134445 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.134964 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.634945198 +0000 UTC m=+121.966218460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.238197 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.238871 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.738558743 +0000 UTC m=+122.069831995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.265737 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6qn99"] Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.325373 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn"] Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.341106 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.341611 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.841593093 +0000 UTC m=+122.172866345 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.374018 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k74l4"] Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.446404 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.447528 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.947484493 +0000 UTC m=+122.278757895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.447666 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.448484 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:45.948471551 +0000 UTC m=+122.279744803 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: W1124 08:55:45.483467 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70f17021_823b_4f40_b34b_a94a6ab152b9.slice/crio-3ef492a5dd3e4954b9e4c30cb6e29b26866de97f61b8f98fa0ce2267928d68a8 WatchSource:0}: Error finding container 3ef492a5dd3e4954b9e4c30cb6e29b26866de97f61b8f98fa0ce2267928d68a8: Status 404 returned error can't find the container with id 3ef492a5dd3e4954b9e4c30cb6e29b26866de97f61b8f98fa0ce2267928d68a8 Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.550433 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.551011 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.050982265 +0000 UTC m=+122.382255517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: W1124 08:55:45.558230 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfba87784_a987_4620_b3ce_6ac015bbd4d1.slice/crio-6dff247b5e3eb94dce876c5ca7606be2d68dc9cc680e4cca9f800150a4d3499a WatchSource:0}: Error finding container 6dff247b5e3eb94dce876c5ca7606be2d68dc9cc680e4cca9f800150a4d3499a: Status 404 returned error can't find the container with id 6dff247b5e3eb94dce876c5ca7606be2d68dc9cc680e4cca9f800150a4d3499a Nov 24 08:55:45 crc kubenswrapper[4719]: W1124 08:55:45.577696 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f8e3ee1_3f11_4e36_8063_d96db6c59a40.slice/crio-f15d189dea86ff80e9670aca9458ad4411e6a084254171db4e96e7f2de1c1342 WatchSource:0}: Error finding container f15d189dea86ff80e9670aca9458ad4411e6a084254171db4e96e7f2de1c1342: Status 404 returned error can't find the container with id f15d189dea86ff80e9670aca9458ad4411e6a084254171db4e96e7f2de1c1342 Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.648244 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mn2gk"] Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.648380 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" podStartSLOduration=99.647814388 podStartE2EDuration="1m39.647814388s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:45.637231987 +0000 UTC m=+121.968505259" watchObservedRunningTime="2025-11-24 08:55:45.647814388 +0000 UTC m=+121.979087640" Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.652948 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.654654 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.154633922 +0000 UTC m=+122.485907174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.662846 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4qkwc"] Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.760267 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.760503 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.26044359 +0000 UTC m=+122.591716842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.761014 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.761570 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.261558542 +0000 UTC m=+122.592831794 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.799415 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-l4lt5" podStartSLOduration=99.799385247 podStartE2EDuration="1m39.799385247s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:45.79457357 +0000 UTC m=+122.125846832" watchObservedRunningTime="2025-11-24 08:55:45.799385247 +0000 UTC m=+122.130658499" Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.863255 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.864950 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.36491595 +0000 UTC m=+122.696189202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.923968 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g48p5"] Nov 24 08:55:45 crc kubenswrapper[4719]: I1124 08:55:45.968447 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:45 crc kubenswrapper[4719]: E1124 08:55:45.979721 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.479690883 +0000 UTC m=+122.810964135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.001578 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z8p7k"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.027581 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.042360 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" event={"ID":"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd","Type":"ContainerStarted","Data":"a1207abccc1408b89005a7823e2b0d6fa73098fbf2f554f0ac9838fcf38ebb4b"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.046235 4719 generic.go:334] "Generic (PLEG): container finished" podID="5c449dd1-4e36-4f64-8d34-ec281a84f870" containerID="4ed73e43ab6d4791a69ac75c6a2edc4e425d760ff245352d69a9df856fa5e6ee" exitCode=0 Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.046679 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" event={"ID":"5c449dd1-4e36-4f64-8d34-ec281a84f870","Type":"ContainerDied","Data":"4ed73e43ab6d4791a69ac75c6a2edc4e425d760ff245352d69a9df856fa5e6ee"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.072393 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.072969 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.572944814 +0000 UTC m=+122.904218066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.089317 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" podStartSLOduration=100.089281049 podStartE2EDuration="1m40.089281049s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:46.084252406 +0000 UTC m=+122.415525678" watchObservedRunningTime="2025-11-24 08:55:46.089281049 +0000 UTC m=+122.420554321" Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.107610 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4887s" event={"ID":"8389675e-5e4d-40d2-a5c8-b3e3587bf67e","Type":"ContainerStarted","Data":"ad5ae8df35cd6b9d1bb60da49e6c45a6ef3cfde4d80c0a3e84d1687d444b70c9"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.121483 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.121550 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd" event={"ID":"6bf07625-221a-4cb4-9fe2-520e8f0ee115","Type":"ContainerStarted","Data":"3a07ad4f20c7c364e0d6d1e4847c6c5d8040b7ad6de3a70f660491efe7099abf"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.134257 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-qhkxq"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.151639 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" event={"ID":"ce13f2cf-2ff9-4178-a689-14514c8b0b37","Type":"ContainerStarted","Data":"165aefcbc30c984ed97f804da9592b1724c57ba3e72fd2eceddde9b110ac6c0d"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.154614 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7k5s4" podStartSLOduration=100.154592036 podStartE2EDuration="1m40.154592036s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:46.145800136 +0000 UTC m=+122.477073388" watchObservedRunningTime="2025-11-24 08:55:46.154592036 +0000 UTC m=+122.485865288" Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.154818 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.160588 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" 
event={"ID":"be010eff-2ece-4d07-98e1-6c7d593d89b1","Type":"ContainerStarted","Data":"20cd2566e5160d5db86c36a6ca94b7c81ef4911d6e47995564d8e32c0c92b6dd"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.176101 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.177805 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.677786585 +0000 UTC m=+123.009059837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.215233 4719 generic.go:334] "Generic (PLEG): container finished" podID="15b89539-dfb7-4d1b-9300-e04517c96486" containerID="29eedea98bda46126392518086a965b59d6bb0266283b09566680a72d696e8ba" exitCode=0 Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.215360 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" event={"ID":"15b89539-dfb7-4d1b-9300-e04517c96486","Type":"ContainerDied","Data":"29eedea98bda46126392518086a965b59d6bb0266283b09566680a72d696e8ba"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.246293 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" event={"ID":"055c444e-b496-401b-b915-e8525733dd35","Type":"ContainerStarted","Data":"5a72a4b2c607e56c26a253502b3634bec50e8783ef737c5158153c3bf58c4ef3"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.251142 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2cmg5" podStartSLOduration=100.251105319 podStartE2EDuration="1m40.251105319s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:46.24023341 +0000 UTC m=+122.571506662" watchObservedRunningTime="2025-11-24 08:55:46.251105319 +0000 UTC m=+122.582378571" Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.270512 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" event={"ID":"85d596cf-88d9-4858-95a2-cfcae776651c","Type":"ContainerStarted","Data":"f4cf27db8b57d250d1d4e53f8e5fd46210a75b0eb2b10ad90de3ffcee8da003e"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.280143 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.281711 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.781691089 +0000 UTC m=+123.112964341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.359463 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2v6sn"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.363601 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" event={"ID":"510fbd10-427b-48c8-94ba-99f54e2227cc","Type":"ContainerStarted","Data":"a711fe1109e719cea724432423a2191794075516bc768485db4b5f623fbbce45"} Nov 24 08:55:46 crc kubenswrapper[4719]: W1124 08:55:46.371745 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6fd0d0f_2097_474e_a6a9_528cb296457a.slice/crio-164d936b23315d1e35c395effc3b443ac1350c6f0b67e88ae763f8ab9f58b7b7 WatchSource:0}: Error finding container 164d936b23315d1e35c395effc3b443ac1350c6f0b67e88ae763f8ab9f58b7b7: Status 404 returned error can't find the container with id 164d936b23315d1e35c395effc3b443ac1350c6f0b67e88ae763f8ab9f58b7b7 Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.386841 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.387515 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.887495777 +0000 UTC m=+123.218769029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.399638 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" event={"ID":"70f17021-823b-4f40-b34b-a94a6ab152b9","Type":"ContainerStarted","Data":"3ef492a5dd3e4954b9e4c30cb6e29b26866de97f61b8f98fa0ce2267928d68a8"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.410570 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.430448 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f" event={"ID":"f42a4caa-e790-4ec2-a6fd-28d97cafcf32","Type":"ContainerStarted","Data":"444bb5069d88ec3e8afb270fb68b953a1a35f28895b78d20893ef057afdbc0be"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.449404 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" event={"ID":"30a0bbd7-f318-46fe-a627-238dab2e710f","Type":"ContainerStarted","Data":"a41c8516892212700cc18a498ff4e8b2ed82435bd73e20baea2aac3d53e05eb1"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.451582 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" event={"ID":"fba87784-a987-4620-b3ce-6ac015bbd4d1","Type":"ContainerStarted","Data":"6dff247b5e3eb94dce876c5ca7606be2d68dc9cc680e4cca9f800150a4d3499a"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.461255 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" event={"ID":"2f8e3ee1-3f11-4e36-8063-d96db6c59a40","Type":"ContainerStarted","Data":"f15d189dea86ff80e9670aca9458ad4411e6a084254171db4e96e7f2de1c1342"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.464846 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" event={"ID":"5205f7bb-b9e5-4481-a789-63071edc127f","Type":"ContainerStarted","Data":"2797498e8b5e839ccee91a679542c2e52b1bc7d1ba96807b90b01fb46252ad9f"} Nov 24 08:55:46 crc kubenswrapper[4719]: W1124 08:55:46.469810 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541155af_19a7_4438_9dc6_700d5ba1e889.slice/crio-684f4866905ba8ce89813cffd5cb417626d3e0897c4096de596d9224df0c5f52 WatchSource:0}: Error finding container 684f4866905ba8ce89813cffd5cb417626d3e0897c4096de596d9224df0c5f52: Status 404 returned error can't find the container with id 684f4866905ba8ce89813cffd5cb417626d3e0897c4096de596d9224df0c5f52 Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.477278 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" 
event={"ID":"cdf07083-6f82-49a7-9af9-b2d7aec76240","Type":"ContainerStarted","Data":"2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e"} Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.481735 4719 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-d5d8q container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.481797 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" podUID="cdf07083-6f82-49a7-9af9-b2d7aec76240" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.481908 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.481938 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.482095 4719 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hcbkk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.482113 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" podUID="5ed2027e-eab9-48cc-a501-e6ff6ce80e92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 24 08:55:46 crc kubenswrapper[4719]: W1124 08:55:46.485666 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d21cb73_ee22_43f3_8824_393d3f6335b6.slice/crio-f32b313d4e35a580108772108f3345ff27c0cb89c74daca946d22573dbef8d30 WatchSource:0}: Error finding container f32b313d4e35a580108772108f3345ff27c0cb89c74daca946d22573dbef8d30: Status 404 returned error can't find the container with id f32b313d4e35a580108772108f3345ff27c0cb89c74daca946d22573dbef8d30 Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.488592 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.489058 4719 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.988999873 +0000 UTC m=+123.320273125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.491426 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.496460 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:46.996419614 +0000 UTC m=+123.327693026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.599858 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.600253 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.100210704 +0000 UTC m=+123.431483956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.600676 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.601807 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.101797949 +0000 UTC m=+123.433071201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.684111 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zzpsx"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.702487 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.702854 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.202833852 +0000 UTC m=+123.534107104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.713825 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hsdhb"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.736168 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtqd7"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.805083 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.805664 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.305642535 +0000 UTC m=+123.636915797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.834795 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd"] Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.905970 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.907019 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.406964185 +0000 UTC m=+123.738237437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.907304 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:46 crc kubenswrapper[4719]: E1124 08:55:46.907968 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.407911812 +0000 UTC m=+123.739185064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:46 crc kubenswrapper[4719]: I1124 08:55:46.972079 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ftc62"] Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.011974 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.012477 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.512450714 +0000 UTC m=+123.843723966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.115684 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.116144 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.616128722 +0000 UTC m=+123.947401974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.216762 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.216990 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.716951408 +0000 UTC m=+124.048224670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.217139 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.217614 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.717599646 +0000 UTC m=+124.048872898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: W1124 08:55:47.264559 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c2dd3a2_e658_4127_a661_0590d998ea1c.slice/crio-47b3c4cdadbf9c6b2727f999e77122140120928a4440bb54e67efd916f936e81 WatchSource:0}: Error finding container 47b3c4cdadbf9c6b2727f999e77122140120928a4440bb54e67efd916f936e81: Status 404 returned error can't find the container with id 47b3c4cdadbf9c6b2727f999e77122140120928a4440bb54e67efd916f936e81 Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.318241 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.318630 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.818574447 +0000 UTC m=+124.149847709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.319144 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.321775 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.821706826 +0000 UTC m=+124.152980078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.428289 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.428860 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:47.928834702 +0000 UTC m=+124.260107954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.532996 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.533751 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.033718833 +0000 UTC m=+124.364992085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.587280 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z8p7k" event={"ID":"c299891d-a79d-40cb-bfda-074f6e9ea036","Type":"ContainerStarted","Data":"1371909cef204edb842a96ec6057ec858a6a37435baaf3a4e88099fbc36fddab"} Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.602085 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" event={"ID":"003a05a0-7927-454d-97e6-935ee34279f5","Type":"ContainerStarted","Data":"14d3d8a4fb9d322580821421f99ed83516bed8ec554166cec3b2d0c8219bdb2a"} Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.618242 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2v6sn" event={"ID":"541155af-19a7-4438-9dc6-700d5ba1e889","Type":"ContainerStarted","Data":"684f4866905ba8ce89813cffd5cb417626d3e0897c4096de596d9224df0c5f52"} Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.634200 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.634429 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.134390894 +0000 UTC m=+124.465664146 (durationBeforeRetry 500ms). 
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.635105 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.635789 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.135770044 +0000 UTC m=+124.467043296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.667958 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" event={"ID":"510fbd10-427b-48c8-94ba-99f54e2227cc","Type":"ContainerStarted","Data":"ac72229f0af96aa0e4cf5b11883f0da0575d12400b197b521efdb8303a6cc175"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.682814 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" event={"ID":"c0b74b9b-50d6-454d-b527-a5980f7d762e","Type":"ContainerStarted","Data":"3a183ce7b9a37d3ae336dc7b14f14f96643cb6db1d218dd0e1b51f6477638bb0"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.693992 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" event={"ID":"c6fd0d0f-2097-474e-a6a9-528cb296457a","Type":"ContainerStarted","Data":"164d936b23315d1e35c395effc3b443ac1350c6f0b67e88ae763f8ab9f58b7b7"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.730554 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" event={"ID":"9a0616cf-bdcc-463d-8185-fd49b74cd419","Type":"ContainerStarted","Data":"dae8e73e62eacc935b1382029b39fddef028fe06a47ff0b900a7fabc0da039eb"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.736568 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.736787 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.236750664 +0000 UTC m=+124.568023916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.736855 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.737480 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.237461175 +0000 UTC m=+124.568734427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.752069 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f" event={"ID":"f42a4caa-e790-4ec2-a6fd-28d97cafcf32","Type":"ContainerStarted","Data":"f770299d7f286d3a4191c8635645fe2fed5708fdd77157d483bf3b5f6f8f1684"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.802215 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" event={"ID":"85d596cf-88d9-4858-95a2-cfcae776651c","Type":"ContainerStarted","Data":"1a5f0229fac35c64d62b5ee0e18ddb7820f9ddc3bad200468fa5f72948772a3a"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.839274 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" event={"ID":"613468f4-6a02-4828-8873-01bccb4b2c43","Type":"ContainerStarted","Data":"11c616334453840fb1b9faf429c50b2b3c2d09955040d80319b9ef50907fb312"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.839923 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.841647 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.341620906 +0000 UTC m=+124.672894158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.845979 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7zwxh" podStartSLOduration=101.845958609 podStartE2EDuration="1m41.845958609s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:47.765718698 +0000 UTC m=+124.096991960" watchObservedRunningTime="2025-11-24 08:55:47.845958609 +0000 UTC m=+124.177231861"
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.854766 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" event={"ID":"dd37c67b-4f85-4ae8-b9ad-27d63aadca79","Type":"ContainerStarted","Data":"7512cc5667b4e42db437b23afc46c7511fd3519568fbeaf99c03dc7b661400b0"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.856977 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wbvqt" event={"ID":"0abda2dc-f505-4af1-be2e-fb5b3765bb23","Type":"ContainerStarted","Data":"568fa0a2bc084acec1eab540d93485b6323133c7fb7f1adf011c47c21da12fb5"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.868613 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" event={"ID":"732e3b35-79a1-47d8-bc13-44ddffb8de36","Type":"ContainerStarted","Data":"86511a00965f76d81a3113cfcf322ca058f491937a4945c8e12e1513ebdfdd5b"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.887738 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd" event={"ID":"6bf07625-221a-4cb4-9fe2-520e8f0ee115","Type":"ContainerStarted","Data":"cfbebd0a51c4c327cb64a2d43fd1f99d742ecc1b8fd05a97653c22008481f6e6"}
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.902994 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jpl9f" podStartSLOduration=101.90296291 podStartE2EDuration="1m41.90296291s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:47.845337391 +0000 UTC m=+124.176610663" watchObservedRunningTime="2025-11-24 08:55:47.90296291 +0000 UTC m=+124.234236162"
Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.943458 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
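The pod_startup_latency_tracker records interleaved here are the kubelet's startup SLI instrumentation: podStartSLOduration is the startup latency with the image-pull window subtracted, and the zero-value firstStartedPulling/lastFinishedPulling timestamps (0001-01-01 00:00:00) mean no pull was observed, which is why podStartSLOduration and podStartE2EDuration agree (101.845958609 vs "1m41.845958609s"). A throwaway filter for skimming these values out of the journal; the regexp matches the format seen above and is illustrative, not a stable kubelet contract:

```go
// Read journal lines on stdin, print pod name and startup SLO duration.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`pod="([^"]+)" podStartSLOduration=([0-9.]+)`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s took %ss to start\n", m[1], m[2])
		}
	}
}
```

Feeding it the kubelet unit's journal output would list each pod with its observed startup latency, which makes the roughly 100-second cold-boot delay on this node easy to see at a glance.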
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:47 crc kubenswrapper[4719]: E1124 08:55:47.944013 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.443996056 +0000 UTC m=+124.775269308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.944894 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" event={"ID":"30a0bbd7-f318-46fe-a627-238dab2e710f","Type":"ContainerStarted","Data":"4126dbe256aca335480686d7cf174c4d9966ea3011215f9eab3929e3e39cb518"} Nov 24 08:55:47 crc kubenswrapper[4719]: I1124 08:55:47.967415 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4887s" event={"ID":"8389675e-5e4d-40d2-a5c8-b3e3587bf67e","Type":"ContainerStarted","Data":"b465f40b1a6e33352e60698b2b66c4feae7566888430bb6ac59fc04ead95d0a0"} Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.008391 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" event={"ID":"70f17021-823b-4f40-b34b-a94a6ab152b9","Type":"ContainerStarted","Data":"5c4d23679cfdd46281733ef8bd1dc23528bff5b3f0f2ce9b3a9f3a7c0ec6ae72"} Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.009841 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-d8r69" podStartSLOduration=102.009822838 podStartE2EDuration="1m42.009822838s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:47.902246259 +0000 UTC m=+124.233519531" watchObservedRunningTime="2025-11-24 08:55:48.009822838 +0000 UTC m=+124.341096090" Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.010618 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" event={"ID":"d97493d9-bce3-4ee4-9e4b-5382442ad977","Type":"ContainerStarted","Data":"1740ddb527fa291722975ed65e94d289c3f8ddfc106b6d008453b0a0623463ed"} Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.011604 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.025014 4719 patch_prober.go:28] interesting pod/console-operator-58897d9998-mn2gk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure 
output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.025110 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" podUID="d97493d9-bce3-4ee4-9e4b-5382442ad977" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.050473 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.052694 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.552638255 +0000 UTC m=+124.883911507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.067516 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8fxzr" podStartSLOduration=102.067486087 podStartE2EDuration="1m42.067486087s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.007242334 +0000 UTC m=+124.338515596" watchObservedRunningTime="2025-11-24 08:55:48.067486087 +0000 UTC m=+124.398759359" Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.068561 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4887s" podStartSLOduration=102.068547417 podStartE2EDuration="1m42.068547417s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.066906221 +0000 UTC m=+124.398179483" watchObservedRunningTime="2025-11-24 08:55:48.068547417 +0000 UTC m=+124.399820689" Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.075878 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" event={"ID":"5ca69dfc-1cff-4287-81e4-d6aa55d77dcd","Type":"ContainerStarted","Data":"4d723b203437e56da68cb33460e026ca2247b81247fa76c9a76744fc9418aec9"} Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.085396 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4887s" Nov 24 
08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.086945 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.087029 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.109563 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" event={"ID":"8d21cb73-ee22-43f3-8824-393d3f6335b6","Type":"ContainerStarted","Data":"f32b313d4e35a580108772108f3345ff27c0cb89c74daca946d22573dbef8d30"} Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.112215 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.112378 4719 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xzgz5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.112430 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" podUID="8d21cb73-ee22-43f3-8824-393d3f6335b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.156148 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.158616 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.658590137 +0000 UTC m=+124.989863579 (durationBeforeRetry 500ms). 
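The probe failures logged here all share the same shape: the container has just started (its ContainerStarted event lands a few hundred milliseconds earlier) and its endpoint is not listening yet, so the kubelet's prober gets "connect: connection refused". That window between process start and socket bind is normal during a mass startup like this one; it only matters if it persists past the probe's failure threshold. Reproducing a probe by hand can separate a slow-starting server from a wrong port or path. A sketch, assuming the pod IP and port from the console-operator entry above are reachable from the node; certificate verification is skipped because the kubelet's HTTPS probes also tolerate self-signed serving certs:

```go
// Manually issue the readiness request the kubelet is retrying.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := c.Get("https://10.217.0.15:8443/readyz")
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```

A "connection refused" that later turns into a 200 matches what this log shows once the operators finish starting.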
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.176901 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" event={"ID":"be010eff-2ece-4d07-98e1-6c7d593d89b1","Type":"ContainerStarted","Data":"c89ea7d0da7c9c9b37e1f70de4d9c0bd0269688c4f4c5ac74055ed201bd17d24"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.205164 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6qn99" podStartSLOduration=102.20513541 podStartE2EDuration="1m42.20513541s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.122960734 +0000 UTC m=+124.454234006" watchObservedRunningTime="2025-11-24 08:55:48.20513541 +0000 UTC m=+124.536408682"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.206471 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" podStartSLOduration=103.206464428 podStartE2EDuration="1m43.206464428s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.204458991 +0000 UTC m=+124.535732273" watchObservedRunningTime="2025-11-24 08:55:48.206464428 +0000 UTC m=+124.537737680"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.216576 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" event={"ID":"ee3d18bd-4007-4fac-952d-528cb25a90dd","Type":"ContainerStarted","Data":"d9c5ee55a5fc0bdf73b14b7222f7c24ee81c0636585538f2208618e9bb86f9ba"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.256841 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" event={"ID":"6818985a-ffd6-4447-bafe-624296df6660","Type":"ContainerStarted","Data":"8004a49debda73b8f5c6bc495014824a1e471c97c5eaa35825f6e2e2e1caaab4"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.258201 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.260029 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z58r9" podStartSLOduration=102.26000088 podStartE2EDuration="1m42.26000088s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.256693906 +0000 UTC m=+124.587967168" watchObservedRunningTime="2025-11-24 08:55:48.26000088 +0000 UTC m=+124.591274132"
Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.260467 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.760442983 +0000 UTC m=+125.091716235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.318053 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" event={"ID":"bc33bec0-dcdd-4cb1-a872-5ad29dc1afbc","Type":"ContainerStarted","Data":"401278be5449f0982bb91c38ef6a5e89d06c6c99205ccdba4ca8fb59f8fa7751"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.348720 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" event={"ID":"6fd95d6b-226e-4eef-a232-85205a89d877","Type":"ContainerStarted","Data":"5e942042c16513ef028e14e86606debd5c4c966aef1a8f31d94d5c8a0d57b78f"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.360554 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.361080 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:48.861011702 +0000 UTC m=+125.192284954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.381775 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x4djh" podStartSLOduration=103.381742391 podStartE2EDuration="1m43.381742391s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.351344797 +0000 UTC m=+124.682618059" watchObservedRunningTime="2025-11-24 08:55:48.381742391 +0000 UTC m=+124.713015643"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.398564 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" event={"ID":"5205f7bb-b9e5-4481-a789-63071edc127f","Type":"ContainerStarted","Data":"39263347f3dad029bde5a51cd54f236bb82190dd4240a8097edc24e47c9d0030"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.423606 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" podStartSLOduration=102.42357392 podStartE2EDuration="1m42.42357392s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.420703309 +0000 UTC m=+124.751976571" watchObservedRunningTime="2025-11-24 08:55:48.42357392 +0000 UTC m=+124.754847182"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.426746 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" event={"ID":"76540cf5-0cd5-4282-b3c3-dd12105f0d4e","Type":"ContainerStarted","Data":"6ff5eb1f5bb61c9bae7d66a488056515df664553e848c96b5e054b8eeb8a30e6"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.502161 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.503094 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.00306212 +0000 UTC m=+125.334335382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
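The "SyncLoop (PLEG): event for pod" records throughout this window come from the Pod Lifecycle Event Generator, which relists containers through the CRI runtime and feeds state changes (here ContainerStarted, with the new container or sandbox ID as Data) into the kubelet sync loop. The event={...} payload is klog structured output and is close enough to JSON to decode offline. A small sketch for pulling those payloads apart, using one of the payloads above; the struct mirrors the visible fields only, not the kubelet's internal PLEG type:

```go
// Decode a PLEG event payload extracted from a kubenswrapper log line.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type podLifecycleEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // e.g. ContainerStarted
	Data string `json:"Data"` // container/sandbox ID for started events
}

func main() {
	raw := `{"ID":"c299891d-a79d-40cb-bfda-074f6e9ea036","Type":"ContainerStarted","Data":"1371909cef204edb842a96ec6057ec858a6a37435baaf3a4e88099fbc36fddab"}`
	var ev podLifecycleEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s: %s (%s...)\n", ev.ID, ev.Type, ev.Data[:12])
}
```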
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.510425 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zzg" podStartSLOduration=103.510377088 podStartE2EDuration="1m43.510377088s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.503934065 +0000 UTC m=+124.835207327" watchObservedRunningTime="2025-11-24 08:55:48.510377088 +0000 UTC m=+124.841650340"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.519334 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" event={"ID":"7c2dd3a2-e658-4127-a661-0590d998ea1c","Type":"ContainerStarted","Data":"47b3c4cdadbf9c6b2727f999e77122140120928a4440bb54e67efd916f936e81"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.582202 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp" event={"ID":"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6","Type":"ContainerStarted","Data":"512bde991ca73948d930cfe52844996d8fc805b0c1da8f6335b093b2bbdca9bc"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.596383 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" event={"ID":"15b89539-dfb7-4d1b-9300-e04517c96486","Type":"ContainerStarted","Data":"16aaad494d9de86753135a25e4d8947668906e85ea938a499d6a4c00dba1159a"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.597289 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.608082 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.608628 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.108608811 +0000 UTC m=+125.439882063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.628835 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp" podStartSLOduration=102.628792784 podStartE2EDuration="1m42.628792784s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.626770487 +0000 UTC m=+124.958043749" watchObservedRunningTime="2025-11-24 08:55:48.628792784 +0000 UTC m=+124.960066036"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.656754 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" event={"ID":"2f8e3ee1-3f11-4e36-8063-d96db6c59a40","Type":"ContainerStarted","Data":"e3727b40df2ac7f601e8f9c1d218375057a9d0aaf55c3747c4a574e18551fbd0"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.667156 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" event={"ID":"055c444e-b496-401b-b915-e8525733dd35","Type":"ContainerStarted","Data":"481b4316d3b13e9eed66b30c95b699bdbfe09fb01a703328458db36e49c34e97"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.669874 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" podStartSLOduration=103.669847232 podStartE2EDuration="1m43.669847232s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.665606611 +0000 UTC m=+124.996879873" watchObservedRunningTime="2025-11-24 08:55:48.669847232 +0000 UTC m=+125.001120484"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.696265 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc" event={"ID":"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3","Type":"ContainerStarted","Data":"24f31feabe1c957708a29039232d787641073c69718c535f376068d12715c7ad"}
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.704515 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.708850 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.710614 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.21058123 +0000 UTC m=+125.541854652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.711194 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.742505 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wtgd6" podStartSLOduration=102.742468366 podStartE2EDuration="1m42.742468366s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:48.741027085 +0000 UTC m=+125.072300367" watchObservedRunningTime="2025-11-24 08:55:48.742468366 +0000 UTC m=+125.073741618"
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.813239 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.820951 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.320931177 +0000 UTC m=+125.652204429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:48 crc kubenswrapper[4719]: I1124 08:55:48.922937 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:48 crc kubenswrapper[4719]: E1124 08:55:48.923589 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.423564865 +0000 UTC m=+125.754838117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.026214 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.027303 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.527283603 +0000 UTC m=+125.858556855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.092841 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.092899 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.129678 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.130253 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.63022773 +0000 UTC m=+125.961500982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.185005 4719 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-52tkz container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.185094 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" podUID="15b89539-dfb7-4d1b-9300-e04517c96486" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.185138 4719 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-52tkz container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.185213 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" podUID="15b89539-dfb7-4d1b-9300-e04517c96486" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.233361 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.233890 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.733869836 +0000 UTC m=+126.065143098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
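By this point the volume has been retried for roughly two seconds without progress while everything else on the node is coming up. When triaging a pod stuck like image-registry-697d97f7c8-j26j4, the Kubernetes events for the pod typically carry the same mount failures in condensed form. A client-go sketch, assuming the namespace and pod name from these logs; it is the programmatic equivalent of the events section of `oc describe pod`:

```go
// List events recorded against the stuck image-registry pod.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	evs, err := cs.CoreV1().Events("openshift-image-registry").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "involvedObject.name=image-registry-697d97f7c8-j26j4"})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
}
```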
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.437142 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.437719 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:49.937695161 +0000 UTC m=+126.268968403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.539860 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.540484 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.040458893 +0000 UTC m=+126.371732145 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.641330 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.641469 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.141423873 +0000 UTC m=+126.472697125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.641933 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.642701 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.142679429 +0000 UTC m=+126.473952681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.704789 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" event={"ID":"5205f7bb-b9e5-4481-a789-63071edc127f","Type":"ContainerStarted","Data":"54958b4a0285058f0b7c5fadacc594ea41c342469907c7bbe2f236f7effaa7a2"}
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.708110 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" event={"ID":"ee3d18bd-4007-4fac-952d-528cb25a90dd","Type":"ContainerStarted","Data":"622eb834bea3a27d28532dfe8ed34e0cbda9d719ba2fd30ec73ea67ab7e025e2"}
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.713643 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc" event={"ID":"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3","Type":"ContainerStarted","Data":"aa9ce07dca07f0e0ec0e9e7522bb13f1b3362481a17a19ddc3f978421531edb6"}
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.716310 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd" event={"ID":"6bf07625-221a-4cb4-9fe2-520e8f0ee115","Type":"ContainerStarted","Data":"4e3f60648e647f37590928bacc34de24970a59cc22d98715565b6ef59e9fab55"}
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.718732 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" event={"ID":"8d21cb73-ee22-43f3-8824-393d3f6335b6","Type":"ContainerStarted","Data":"eaef3bc178ceb12e3b60a3cd680149c1080ab5b1e7fd5a604f89fe6bd7bf713c"}
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.719706 4719 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xzgz5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.719774 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" podUID="8d21cb73-ee22-43f3-8824-393d3f6335b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.722251 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" event={"ID":"613468f4-6a02-4828-8873-01bccb4b2c43","Type":"ContainerStarted","Data":"c164ae6a5e4cf5219ee0472344f8966e6aabad735aae339db4129be399c364bc"}
Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.725022 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" event={"ID":"c6fd0d0f-2097-474e-a6a9-528cb296457a","Type":"ContainerStarted","Data":"c1c3745f37fa18e12f3a6373c3f20921b163ce373a59da8373d5e663fd75bfbd"}
event={"ID":"c6fd0d0f-2097-474e-a6a9-528cb296457a","Type":"ContainerStarted","Data":"c1c3745f37fa18e12f3a6373c3f20921b163ce373a59da8373d5e663fd75bfbd"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.725339 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.727081 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wbvqt" event={"ID":"0abda2dc-f505-4af1-be2e-fb5b3765bb23","Type":"ContainerStarted","Data":"e469976c9e3beb10c7c28959967ee9528d4e187911688db26e45d3860c75ade4"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.727644 4719 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-gjq99 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.727692 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" podUID="c6fd0d0f-2097-474e-a6a9-528cb296457a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.729339 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" event={"ID":"76540cf5-0cd5-4282-b3c3-dd12105f0d4e","Type":"ContainerStarted","Data":"dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.730245 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.731567 4719 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gtqd7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.731627 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.738127 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vzgbp" event={"ID":"a71a5b9e-f1e1-4eaf-a71f-22e47b9a5bb6","Type":"ContainerStarted","Data":"c1777cb5db170c6e491e1d95325ae3dca5f5f980dca325b9e2d2639c4a8ae2aa"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.741657 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" event={"ID":"7c2dd3a2-e658-4127-a661-0590d998ea1c","Type":"ContainerStarted","Data":"69aec8b981bf6b17d093bcffb4f3ebd8983f0aa7a7a0e9fdcc91c34d8a71074a"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.743343 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.743865 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.243838934 +0000 UTC m=+126.575112186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.748593 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" event={"ID":"003a05a0-7927-454d-97e6-935ee34279f5","Type":"ContainerStarted","Data":"3fb9dae87cbb4e7b8f4c79bd0e02130a3983ce10dee6a00ed2fce2e71588adc9"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.750535 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z8p7k" event={"ID":"c299891d-a79d-40cb-bfda-074f6e9ea036","Type":"ContainerStarted","Data":"d4ceabc73eb35c256a4f6dc83ed31587af1c8d0874874a7719489fe24d3c1318"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.751795 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" event={"ID":"d97493d9-bce3-4ee4-9e4b-5382442ad977","Type":"ContainerStarted","Data":"62ce781e6d50968baaadc8209977ee096e6281752d527a59126b8ac66564a5ea"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.753332 4719 patch_prober.go:28] interesting pod/console-operator-58897d9998-mn2gk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.753383 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" podUID="d97493d9-bce3-4ee4-9e4b-5382442ad977" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.759166 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" event={"ID":"6fd95d6b-226e-4eef-a232-85205a89d877","Type":"ContainerStarted","Data":"0c8c27432b0a779f92c4db3a93fed33fc33301f5d8d04de08a07f7a30fc746a8"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.760895 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" event={"ID":"dd37c67b-4f85-4ae8-b9ad-27d63aadca79","Type":"ContainerStarted","Data":"760089923a9b6885d73de40b880bddc6eb5dfeaed225b7b8896c1ed9280defdb"} Nov 24 08:55:49 crc kubenswrapper[4719]: 
I1124 08:55:49.762028 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.763190 4719 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ctxrd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.763248 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" podUID="dd37c67b-4f85-4ae8-b9ad-27d63aadca79" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.775982 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" event={"ID":"5c449dd1-4e36-4f64-8d34-ec281a84f870","Type":"ContainerStarted","Data":"a56b35e68f457f46d8af0d099e3cb2d3b7eed11ac06f40af280cb450ae872743"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.789061 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" event={"ID":"fba87784-a987-4620-b3ce-6ac015bbd4d1","Type":"ContainerStarted","Data":"95a11b80d0e061e5411dc63faaeec7c87a74fe75a75f830d7f99540ed7972c88"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.820171 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2v6sn" event={"ID":"541155af-19a7-4438-9dc6-700d5ba1e889","Type":"ContainerStarted","Data":"f498c859c4ffc2cd57c803489711a2b1c818600c97dfbb32ded4260de14055e5"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.834951 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" event={"ID":"510fbd10-427b-48c8-94ba-99f54e2227cc","Type":"ContainerStarted","Data":"6a5d65f3e2121c0725063ab76e29247dcbd216eef8d5868dae6a0c4ff5537b58"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.836095 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.848945 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" event={"ID":"2f8e3ee1-3f11-4e36-8063-d96db6c59a40","Type":"ContainerStarted","Data":"624414f881a326a9998d2b9d7201337cd7969722aff4144c268498758d88e144"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.852105 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.855371 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 08:55:50.355348265 +0000 UTC m=+126.686621727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.883874 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" event={"ID":"c0b74b9b-50d6-454d-b527-a5980f7d762e","Type":"ContainerStarted","Data":"1b59ee9d23e52510a03a492c3244794091c9706755f09a92fb9d69b40d78ef10"} Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.889865 4719 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-52tkz container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.889937 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" podUID="15b89539-dfb7-4d1b-9300-e04517c96486" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.932930 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-n6v6j" podStartSLOduration=103.932894529 podStartE2EDuration="1m43.932894529s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:49.77289322 +0000 UTC m=+126.104166472" watchObservedRunningTime="2025-11-24 08:55:49.932894529 +0000 UTC m=+126.264167781" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.943839 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" podStartSLOduration=103.94380901 podStartE2EDuration="1m43.94380901s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:49.880477649 +0000 UTC m=+126.211750931" watchObservedRunningTime="2025-11-24 08:55:49.94380901 +0000 UTC m=+126.275082262" Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.960945 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:49 crc kubenswrapper[4719]: E1124 08:55:49.962169 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.462151311 +0000 UTC m=+126.793424563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:49 crc kubenswrapper[4719]: I1124 08:55:49.989201 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hsdhb" podStartSLOduration=102.989180659 podStartE2EDuration="1m42.989180659s" podCreationTimestamp="2025-11-24 08:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:49.988593073 +0000 UTC m=+126.319866325" watchObservedRunningTime="2025-11-24 08:55:49.989180659 +0000 UTC m=+126.320453911" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.052553 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jkf8p" podStartSLOduration=104.05252599 podStartE2EDuration="1m44.05252599s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.048604849 +0000 UTC m=+126.379878101" watchObservedRunningTime="2025-11-24 08:55:50.05252599 +0000 UTC m=+126.383799242" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.065745 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.066509 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.566476417 +0000 UTC m=+126.897749849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.089539 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.089610 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.147921 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pfhtd" podStartSLOduration=104.147892522 podStartE2EDuration="1m44.147892522s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.102586073 +0000 UTC m=+126.433859325" watchObservedRunningTime="2025-11-24 08:55:50.147892522 +0000 UTC m=+126.479165784" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.167846 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.168387 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.668366744 +0000 UTC m=+126.999639996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.176983 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wbvqt" podStartSLOduration=10.176962118 podStartE2EDuration="10.176962118s" podCreationTimestamp="2025-11-24 08:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.149196539 +0000 UTC m=+126.480469801" watchObservedRunningTime="2025-11-24 08:55:50.176962118 +0000 UTC m=+126.508235370" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.198708 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" podStartSLOduration=104.198683755 podStartE2EDuration="1m44.198683755s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.179313655 +0000 UTC m=+126.510586917" watchObservedRunningTime="2025-11-24 08:55:50.198683755 +0000 UTC m=+126.529957007" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.199961 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" podStartSLOduration=104.199951171 podStartE2EDuration="1m44.199951171s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.195762812 +0000 UTC m=+126.527036074" watchObservedRunningTime="2025-11-24 08:55:50.199951171 +0000 UTC m=+126.531224433" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.220291 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ftc62" podStartSLOduration=104.220268699 podStartE2EDuration="1m44.220268699s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.21714038 +0000 UTC m=+126.548413642" watchObservedRunningTime="2025-11-24 08:55:50.220268699 +0000 UTC m=+126.551541951" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.270022 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.270494 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 08:55:50.770479517 +0000 UTC m=+127.101752769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.283465 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2v6sn" podStartSLOduration=9.283442035 podStartE2EDuration="9.283442035s" podCreationTimestamp="2025-11-24 08:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.28255816 +0000 UTC m=+126.613831422" watchObservedRunningTime="2025-11-24 08:55:50.283442035 +0000 UTC m=+126.614715277" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.285298 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" podStartSLOduration=104.285289038 podStartE2EDuration="1m44.285289038s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.266907935 +0000 UTC m=+126.598181187" watchObservedRunningTime="2025-11-24 08:55:50.285289038 +0000 UTC m=+126.616562300" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.357979 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s" podStartSLOduration=104.357957013 podStartE2EDuration="1m44.357957013s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.357095869 +0000 UTC m=+126.688369151" watchObservedRunningTime="2025-11-24 08:55:50.357957013 +0000 UTC m=+126.689230265" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.358944 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-m6fbn" podStartSLOduration=104.358938441 podStartE2EDuration="1m44.358938441s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.311595075 +0000 UTC m=+126.642868347" watchObservedRunningTime="2025-11-24 08:55:50.358938441 +0000 UTC m=+126.690211693" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.371519 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.372072 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.872022973 +0000 UTC m=+127.203296225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.447855 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-k74l4" podStartSLOduration=104.447830108 podStartE2EDuration="1m44.447830108s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.444991858 +0000 UTC m=+126.776265140" watchObservedRunningTime="2025-11-24 08:55:50.447830108 +0000 UTC m=+126.779103360" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.448549 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" podStartSLOduration=105.448543549 podStartE2EDuration="1m45.448543549s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.398186997 +0000 UTC m=+126.729460269" watchObservedRunningTime="2025-11-24 08:55:50.448543549 +0000 UTC m=+126.779816801" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.476313 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.476871 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:50.976853584 +0000 UTC m=+127.308126836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.577845 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.578250 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.078227096 +0000 UTC m=+127.409500348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.679367 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.679881 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.179858415 +0000 UTC m=+127.511131667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.781125 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.781433 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.281387661 +0000 UTC m=+127.612660923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.781649 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.782149 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.2820311 +0000 UTC m=+127.613402874 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.882717 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.882989 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.382941798 +0000 UTC m=+127.714215070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.883230 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.883723 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.38370125 +0000 UTC m=+127.714974682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.893185 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" event={"ID":"003a05a0-7927-454d-97e6-935ee34279f5","Type":"ContainerStarted","Data":"f3a9e7ac9109105bc27f7f1fd453366ebb1cb0d425cc6148b35cc94a549723a4"} Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.895829 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z8p7k" event={"ID":"c299891d-a79d-40cb-bfda-074f6e9ea036","Type":"ContainerStarted","Data":"1c7127390511694f11f7f35b0decc58700b5f206634b7a47f3c414b4399b2499"} Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.896274 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z8p7k" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.898095 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" event={"ID":"6818985a-ffd6-4447-bafe-624296df6660","Type":"ContainerStarted","Data":"16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65"} Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.898947 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.899874 4719 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-g48p5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.899922 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" podUID="6818985a-ffd6-4447-bafe-624296df6660" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.904140 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc" event={"ID":"d321c6c1-3a71-42d1-a7e0-96dec2c02fb3","Type":"ContainerStarted","Data":"a052cae15fa7cae08e62b30e5d54acf2ffea444aa52a2a80fc5f33555b749cc2"} Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.905024 4719 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gtqd7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.905101 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.906705 4719 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-gjq99 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.906789 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" podUID="c6fd0d0f-2097-474e-a6a9-528cb296457a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.906994 4719 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ctxrd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.907028 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" podUID="dd37c67b-4f85-4ae8-b9ad-27d63aadca79" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.907711 4719 patch_prober.go:28] interesting pod/console-operator-58897d9998-mn2gk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.907741 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" podUID="d97493d9-bce3-4ee4-9e4b-5382442ad977" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.907830 4719 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-52tkz container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.907848 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" podUID="15b89539-dfb7-4d1b-9300-e04517c96486" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.909776 4719 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xzgz5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.909827 
4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5" podUID="8d21cb73-ee22-43f3-8824-393d3f6335b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.969487 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-qhkxq" podStartSLOduration=104.969451488 podStartE2EDuration="1m44.969451488s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:50.945992041 +0000 UTC m=+127.277265293" watchObservedRunningTime="2025-11-24 08:55:50.969451488 +0000 UTC m=+127.300724740" Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.984990 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.985254 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.485215186 +0000 UTC m=+127.816488438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:50 crc kubenswrapper[4719]: I1124 08:55:50.986098 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:50 crc kubenswrapper[4719]: E1124 08:55:50.988191 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.48817262 +0000 UTC m=+127.819446092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.087141 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.089759 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.589726667 +0000 UTC m=+127.920999919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.091872 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:55:51 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld Nov 24 08:55:51 crc kubenswrapper[4719]: [+]process-running ok Nov 24 08:55:51 crc kubenswrapper[4719]: healthz check failed Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.091971 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.135152 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" podStartSLOduration=105.135131838 podStartE2EDuration="1m45.135131838s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:51.102745307 +0000 UTC m=+127.434018599" watchObservedRunningTime="2025-11-24 08:55:51.135131838 +0000 UTC m=+127.466405090" Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.190458 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.191021 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.691003246 +0000 UTC m=+128.022276498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.202156 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" podStartSLOduration=105.202130783 podStartE2EDuration="1m45.202130783s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:51.192747566 +0000 UTC m=+127.524020818" watchObservedRunningTime="2025-11-24 08:55:51.202130783 +0000 UTC m=+127.533404035" Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.291307 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.291834 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.791810271 +0000 UTC m=+128.123083533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.335369 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-z8p7k" podStartSLOduration=11.335341819 podStartE2EDuration="11.335341819s" podCreationTimestamp="2025-11-24 08:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:51.265693329 +0000 UTC m=+127.596966591" watchObservedRunningTime="2025-11-24 08:55:51.335341819 +0000 UTC m=+127.666615061" Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.393247 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.393772 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.89375648 +0000 UTC m=+128.225029732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.494942 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.495453 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:51.99542414 +0000 UTC m=+128.326697392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.596970 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.597443 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.09742403 +0000 UTC m=+128.428697282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.697983 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.698283 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.198242366 +0000 UTC m=+128.529515628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.698457 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.698898 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.198885774 +0000 UTC m=+128.530159026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.799464 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.800124 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.300101362 +0000 UTC m=+128.631374614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.901829 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:51 crc kubenswrapper[4719]: E1124 08:55:51.902327 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.402310967 +0000 UTC m=+128.733584219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.910284 4719 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-g48p5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.910408 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" podUID="6818985a-ffd6-4447-bafe-624296df6660" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.914196 4719 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gtqd7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.914246 4719 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ctxrd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.914263 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: 
connection refused" Nov 24 08:55:51 crc kubenswrapper[4719]: I1124 08:55:51.914328 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" podUID="dd37c67b-4f85-4ae8-b9ad-27d63aadca79" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.002973 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:52 crc kubenswrapper[4719]: E1124 08:55:52.003290 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.503241617 +0000 UTC m=+128.834514879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.003686 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:52 crc kubenswrapper[4719]: E1124 08:55:52.006265 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.506236172 +0000 UTC m=+128.837509614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.088556 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:55:52 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld Nov 24 08:55:52 crc kubenswrapper[4719]: [+]process-running ok Nov 24 08:55:52 crc kubenswrapper[4719]: healthz check failed Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.088640 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.106128 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:52 crc kubenswrapper[4719]: E1124 08:55:52.106674 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.606651767 +0000 UTC m=+128.937925019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.208450 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:52 crc kubenswrapper[4719]: E1124 08:55:52.209263 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:52.709238923 +0000 UTC m=+129.040512175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.826171 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.826298 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.828130 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:52 crc kubenswrapper[4719]: E1124 08:55:52.828480 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:53.328448867 +0000 UTC m=+129.659722269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.829701 4719 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fr4v7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.829776 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" podUID="6fd95d6b-226e-4eef-a232-85205a89d877" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.924748 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" event={"ID":"732e3b35-79a1-47d8-bc13-44ddffb8de36","Type":"ContainerStarted","Data":"9fa5a02d15abc88fcb9dc9e61d2dc24ed487f580547326cf3c014aadd9996c1d"} Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.931286 4719 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-g48p5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.931370 4719 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" podUID="6818985a-ffd6-4447-bafe-624296df6660" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.931894 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.932899 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.934315 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:52 crc kubenswrapper[4719]: E1124 08:55:52.934941 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:53.434916994 +0000 UTC m=+129.766190436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.938061 4719 patch_prober.go:28] interesting pod/console-f9d7485db-l4lt5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 24 08:55:52 crc kubenswrapper[4719]: I1124 08:55:52.938292 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-l4lt5" podUID="0437d205-eb04-4136-a158-01d8729c335c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.004498 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.004580 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.004508 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.004960 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.026925 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.027317 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.039912 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:53 crc kubenswrapper[4719]: E1124 08:55:53.042931 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:53.542898824 +0000 UTC m=+129.874172256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.090838 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:55:53 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld Nov 24 08:55:53 crc kubenswrapper[4719]: [+]process-running ok Nov 24 08:55:53 crc kubenswrapper[4719]: healthz check failed Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.091349 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.142984 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:53 crc kubenswrapper[4719]: E1124 08:55:53.143669 4719 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:53.643642808 +0000 UTC m=+129.974916250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.184469 4719 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-52tkz container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.184855 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" podUID="15b89539-dfb7-4d1b-9300-e04517c96486" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.185422 4719 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-52tkz container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.185547 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz" podUID="15b89539-dfb7-4d1b-9300-e04517c96486" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.251230 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:53 crc kubenswrapper[4719]: E1124 08:55:53.251601 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:53.751559996 +0000 UTC m=+130.082833248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.676157 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4qkwc" podStartSLOduration=107.676128356 podStartE2EDuration="1m47.676128356s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:51.336537573 +0000 UTC m=+127.667810835" watchObservedRunningTime="2025-11-24 08:55:53.676128356 +0000 UTC m=+130.007401608" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.682863 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.684010 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.690826 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.691121 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.712587 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.760858 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:53 crc kubenswrapper[4719]: E1124 08:55:53.762068 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.262052289 +0000 UTC m=+130.593325541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.863851 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.864443 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.864628 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:55:53 crc kubenswrapper[4719]: E1124 08:55:53.864956 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.364939564 +0000 UTC m=+130.696212816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.885560 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.935379 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mjzxt"] Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.937498 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.941739 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.963539 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-452z9" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.966845 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.966907 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.966960 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:55:53 crc kubenswrapper[4719]: I1124 08:55:53.967218 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:55:53 crc kubenswrapper[4719]: E1124 08:55:53.967713 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.467691566 +0000 UTC m=+130.798965018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.005122 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.029290 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.047887 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjzxt"] Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.071927 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.072170 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.572121234 +0000 UTC m=+130.903394486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.073372 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckjc\" (UniqueName: \"kubernetes.io/projected/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-kube-api-access-6ckjc\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.073594 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.073635 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-utilities\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.073832 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-catalog-content\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.074174 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-24 08:55:54.574148512 +0000 UTC m=+130.905421834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.085138 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.092384 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:55:54 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld
Nov 24 08:55:54 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:55:54 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.092470 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.142453 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ljp9t"]
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.143571 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ljp9t"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.174947 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.175185 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4nr\" (UniqueName: \"kubernetes.io/projected/ff5bf07f-1775-4310-a0b3-5306a4202228-kube-api-access-ch4nr\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t"
Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.175231 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.675196405 +0000 UTC m=+131.006469657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.175354 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.175432 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-utilities\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.175561 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-catalog-content\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.175643 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-catalog-content\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.175756 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-utilities\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.175870 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckjc\" (UniqueName: \"kubernetes.io/projected/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-kube-api-access-6ckjc\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt"
Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.176642 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.676628155 +0000 UTC m=+131.007901407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.177318 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-utilities\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.178729 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-catalog-content\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.217677 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ljp9t"]
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.218231 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.258014 4719 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gtqd7 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body=
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.258384 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.258445 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzgz5"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.275317 4719 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gtqd7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body=
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.275456 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused"
Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.282544 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
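
Note: every MountDevice and TearDown failure above shares one cause: the kubelet cannot build a CSI client because kubevirt.io.hostpath-provisioner has not yet completed plugin registration over the kubelet's plugin-registration socket, so the driver is absent from the kubelet's in-memory driver list. A minimal Go sketch of that lookup-then-fail behavior follows; the type names and socket path are hypothetical illustrations, not the kubelet's actual implementation.

package main

import (
	"fmt"
	"sync"
)

// driverRegistry is a hypothetical stand-in for the kubelet's in-memory
// list of CSI drivers that have completed plugin registration.
type driverRegistry struct {
	mu        sync.RWMutex
	endpoints map[string]string // driver name -> CSI socket path
}

// newCsiDriverClient fails fast when the named driver has not registered
// yet, producing the error text repeated throughout this log.
func (r *driverRegistry) newCsiDriverClient(driverName string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	endpoint, ok := r.endpoints[driverName]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driverName)
	}
	return endpoint, nil
}

// register models what the registration handshake effectively does once
// the csi-hostpathplugin pod's registrar sidecar comes up.
func (r *driverRegistry) register(driverName, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.endpoints[driverName] = endpoint
}

func main() {
	reg := &driverRegistry{endpoints: map[string]string{}}
	if _, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("MountDevice:", err) // fails until registration happens
	}
	// Hypothetical socket path, for illustration only.
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	if ep, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("MountDevice can proceed via", ep)
	}
}
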
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.290222 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckjc\" (UniqueName: \"kubernetes.io/projected/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-kube-api-access-6ckjc\") pod \"community-operators-mjzxt\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.295211 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.795169465 +0000 UTC m=+131.126442717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.295351 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-utilities\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.295476 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch4nr\" (UniqueName: \"kubernetes.io/projected/ff5bf07f-1775-4310-a0b3-5306a4202228-kube-api-access-ch4nr\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.295545 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.295743 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-catalog-content\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.296537 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.796516494 +0000 UTC m=+131.127789746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.298268 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-catalog-content\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.298381 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-utilities\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.348101 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch4nr\" (UniqueName: \"kubernetes.io/projected/ff5bf07f-1775-4310-a0b3-5306a4202228-kube-api-access-ch4nr\") pod \"certified-operators-ljp9t\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.375136 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tdvl4"] Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.376842 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.397108 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.397826 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.897782333 +0000 UTC m=+131.229055585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.408138 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-utilities\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.408294 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.408413 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-catalog-content\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.409477 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:54.909458625 +0000 UTC m=+131.240731877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.410557 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tdvl4"] Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.448135 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gjq99" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.474558 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.511310 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.511815 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.011783314 +0000 UTC m=+131.343056566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.511891 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.511967 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-catalog-content\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.512009 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8k5\" (UniqueName: \"kubernetes.io/projected/45ec96ae-4756-4249-b370-ce98fbe47db0-kube-api-access-qx8k5\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.512085 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-utilities\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.512920 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.012911916 +0000 UTC m=+131.344185168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.513681 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-catalog-content\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.513907 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-utilities\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.541419 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nrlw6"] Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.542487 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.554430 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.566641 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrlw6"] Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.612918 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.613135 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-utilities\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.613177 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-catalog-content\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.613208 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4szrw\" (UniqueName: \"kubernetes.io/projected/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-kube-api-access-4szrw\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " 
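
Note: each failure above is immediately parked with "No retries permitted until ... (durationBeforeRetry 500ms)": the kubelet's nestedpendingoperations layer records a deadline per volume operation and refuses any retry attempted before it. A rough sketch of such a gate follows, assuming the fixed 500ms delay visible in this log (the real kubelet grows the delay exponentially on repeated failures); all names are illustrative.

package main

import (
	"fmt"
	"time"
)

// retryGate is a simplified model of the per-operation backoff seen in
// the log: after a failure, attempts before the deadline are refused
// without running the operation at all.
type retryGate struct {
	durationBeforeRetry time.Duration
	notBefore           time.Time
}

func (g *retryGate) run(name string, op func() error) error {
	if time.Now().Before(g.notBefore) {
		return fmt.Errorf("operation %q: no retries permitted until %s (durationBeforeRetry %s)",
			name, g.notBefore.Format(time.RFC3339Nano), g.durationBeforeRetry)
	}
	err := op()
	if err != nil {
		g.notBefore = time.Now().Add(g.durationBeforeRetry)
	}
	return err
}

func main() {
	gate := &retryGate{durationBeforeRetry: 500 * time.Millisecond}
	mountDevice := func() error {
		return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	fmt.Println(gate.run("volume_mount", mountDevice)) // fails and arms the gate
	fmt.Println(gate.run("volume_mount", mountDevice)) // refused: deadline not reached
	time.Sleep(600 * time.Millisecond)
	fmt.Println(gate.run("volume_mount", mountDevice)) // eligible to run again
}
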
pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.613232 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8k5\" (UniqueName: \"kubernetes.io/projected/45ec96ae-4756-4249-b370-ce98fbe47db0-kube-api-access-qx8k5\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.613656 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.113626509 +0000 UTC m=+131.444899761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.679290 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx8k5\" (UniqueName: \"kubernetes.io/projected/45ec96ae-4756-4249-b370-ce98fbe47db0-kube-api-access-qx8k5\") pod \"community-operators-tdvl4\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.708060 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.714510 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-utilities\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.714582 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-catalog-content\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.714621 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4szrw\" (UniqueName: \"kubernetes.io/projected/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-kube-api-access-4szrw\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.714690 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.715144 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.215129225 +0000 UTC m=+131.546402477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.715734 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-utilities\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.715962 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-catalog-content\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.762468 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4szrw\" (UniqueName: \"kubernetes.io/projected/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-kube-api-access-4szrw\") pod \"certified-operators-nrlw6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.815825 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.816521 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.316502396 +0000 UTC m=+131.647775648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.918163 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:54 crc kubenswrapper[4719]: E1124 08:55:54.918657 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.418640539 +0000 UTC m=+131.749913801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:54 crc kubenswrapper[4719]: I1124 08:55:54.920435 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.005165 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" event={"ID":"732e3b35-79a1-47d8-bc13-44ddffb8de36","Type":"ContainerStarted","Data":"61baa9f92c71483783732ea652d299173acb6f7cced358fa1291654135d9854c"} Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.022927 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.024147 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.524127248 +0000 UTC m=+131.855400500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.063248 4719 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-g48p5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.063695 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" podUID="6818985a-ffd6-4447-bafe-624296df6660" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.077177 4719 patch_prober.go:28] interesting pod/console-operator-58897d9998-mn2gk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.077271 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" podUID="d97493d9-bce3-4ee4-9e4b-5382442ad977" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.080249 4719 patch_prober.go:28] interesting pod/console-operator-58897d9998-mn2gk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.080374 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mn2gk" podUID="d97493d9-bce3-4ee4-9e4b-5382442ad977" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.100371 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:55:55 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld Nov 24 08:55:55 crc kubenswrapper[4719]: [+]process-running ok Nov 24 08:55:55 crc kubenswrapper[4719]: healthz check failed Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 
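
Note: the router startup-probe body above ("[-]backend-http failed: reason withheld", "[+]process-running ok", "healthz check failed") is the aggregated healthz format: one line per named sub-check, with HTTP 500 returned when any check fails, which is exactly what the prober reports as "HTTP probe failed with statuscode: 500". A small illustrative Go handler producing the same shape follows; the check names and port are stand-ins for the example, not the router's actual checks.

package main

import (
	"fmt"
	"net/http"
)

// check is one named sub-check; the aggregated handler emits
// "[+]name ok" or "[-]name failed: reason withheld" per check.
type check struct {
	name string
	run  func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			// Probes then report: HTTP probe failed with statuscode: 500.
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}))
	_ = http.ListenAndServe(":8080", nil)
}
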
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.127476 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.127967 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.62795012 +0000 UTC m=+131.959223372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.198709 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-52tkz"
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.230946 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.232277 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.732259185 +0000 UTC m=+132.063532437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.236160 4719 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ctxrd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.236216 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" podUID="dd37c67b-4f85-4ae8-b9ad-27d63aadca79" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.236345 4719 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ctxrd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.236438 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd" podUID="dd37c67b-4f85-4ae8-b9ad-27d63aadca79" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.332875 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.333319 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.833306048 +0000 UTC m=+132.164579300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
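
Note: the oauth-openshift, console-operator and packageserver probe failures above are client-side timeouts: the kubelet's prober gives up before response headers arrive, producing the "Client.Timeout exceeded while awaiting headers" text. A minimal Go model of that error class follows; the 1s timeout is an assumption (the probe's timeoutSeconds field controls it), the address is copied from the log and is a pod IP reachable only from the cluster network, and the kubelet's prober additionally skips TLS verification, which is omitted here.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// A GET whose server does not return headers within the client timeout
// fails with a "Client.Timeout exceeded while awaiting headers" error,
// the same class of failure quoted in the probe output above.
func main() {
	client := &http.Client{Timeout: 1 * time.Second} // assumed probe timeout
	resp, err := client.Get("https://10.217.0.33:6443/healthz")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode)
}
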
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.378283 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.435550 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.436190 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:55.936161972 +0000 UTC m=+132.267435224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.538695 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.539171 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.03915336 +0000 UTC m=+132.370426612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.665349 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.666174 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.166147011 +0000 UTC m=+132.497420263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.768662 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.769249 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.26923189 +0000 UTC m=+132.600505142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.869860 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.870262 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.370240567 +0000 UTC m=+132.701513819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:55 crc kubenswrapper[4719]: I1124 08:55:55.981273 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:55 crc kubenswrapper[4719]: E1124 08:55:55.981749 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.481723136 +0000 UTC m=+132.812996578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.031438 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.033814 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
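
Note: recovery here depends on the csi-hostpathplugin pod (whose ContainerStarted event appears above) finishing driver registration, after which the driver appears in the node's CSINode object and the parked mount and unmount operations can finally run. A hedged client-go sketch for listing what is registered on the node follows; it assumes a kubeconfig at the default home path and takes the node name crc from these log lines.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Prints the CSI drivers the kubelet has registered for node "crc" by
// reading its CSINode object; while the hostpath plugin is still coming
// up, kubevirt.io.hostpath-provisioner would be absent from this list.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	csiNode, err := cs.StorageV1().CSINodes().Get(context.Background(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered:", d.Name)
	}
}
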
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.037953 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"76df8267-a4e8-4b23-8d9d-6d0c957929cc","Type":"ContainerStarted","Data":"a215826fb649253a9c06272e9ce303bdae4f1cd4cd3460dda4a6972ea6145732"} Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.051912 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.052245 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.082829 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.084113 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.085490 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.585438954 +0000 UTC m=+132.916712216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.086349 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.086818 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.586806784 +0000 UTC m=+132.918080236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.120680 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 08:55:56 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld Nov 24 08:55:56 crc kubenswrapper[4719]: [+]process-running ok Nov 24 08:55:56 crc kubenswrapper[4719]: healthz check failed Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.120774 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.190882 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.191216 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69d62168-687f-4a81-a68e-b2f0a4323967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69d62168-687f-4a81-a68e-b2f0a4323967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.191430 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.691390488 +0000 UTC m=+133.022663790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.191531 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.191847 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69d62168-687f-4a81-a68e-b2f0a4323967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69d62168-687f-4a81-a68e-b2f0a4323967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.194591 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.694564632 +0000 UTC m=+133.025838084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.251751 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lszz2"] Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.257455 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.290757 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.296058 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.296333 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69d62168-687f-4a81-a68e-b2f0a4323967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69d62168-687f-4a81-a68e-b2f0a4323967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.296432 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69d62168-687f-4a81-a68e-b2f0a4323967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69d62168-687f-4a81-a68e-b2f0a4323967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.296631 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69d62168-687f-4a81-a68e-b2f0a4323967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69d62168-687f-4a81-a68e-b2f0a4323967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.296662 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.796616131 +0000 UTC m=+133.127889393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.306397 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszz2"] Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.325916 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjzxt"] Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.363328 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69d62168-687f-4a81-a68e-b2f0a4323967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69d62168-687f-4a81-a68e-b2f0a4323967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.398233 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4gs\" (UniqueName: \"kubernetes.io/projected/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-kube-api-access-tf4gs\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.398549 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-utilities\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.398704 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.398813 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-catalog-content\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.399414 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:56.899398451 +0000 UTC m=+133.230671703 (durationBeforeRetry 500ms). 
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.424450 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.517862 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.518674 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-catalog-content\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.518743 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4gs\" (UniqueName: \"kubernetes.io/projected/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-kube-api-access-tf4gs\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.518777 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-utilities\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.519360 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-utilities\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2"
Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.519462 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.019442735 +0000 UTC m=+133.350715987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.520246 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-catalog-content\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.619112 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ljp9t"]
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.620126 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.620521 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.120506904 +0000 UTC m=+133.451780166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.628236 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nn46q"]
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.629386 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.685968 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nn46q"]
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.698577 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4gs\" (UniqueName: \"kubernetes.io/projected/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-kube-api-access-tf4gs\") pod \"redhat-marketplace-lszz2\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " pod="openshift-marketplace/redhat-marketplace-lszz2"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.727857 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.728251 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-utilities\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.728313 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-catalog-content\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.728349 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k99vx\" (UniqueName: \"kubernetes.io/projected/687a3665-1f60-48cf-ad90-013c77a6fefb-kube-api-access-k99vx\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.728545 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.22852266 +0000 UTC m=+133.559795922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
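Note that the kubelet does not retry these failures inline. nestedpendingoperations stamps the failed operation with a deadline ("No retries permitted until ...", durationBeforeRetry 500ms) and the reconciler loop simply skips it until the deadline passes, which is why the same error reappears every few hundred milliseconds below. A hand-rolled sketch of that deadline gate, assuming a fixed 500ms backoff as seen in this log rather than the real implementation in nestedpendingoperations.go:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pendingOp gates retries behind a deadline: a failed operation records
// notBefore = now + backoff and is skipped until then. Illustrative only.
type pendingOp struct {
	notBefore time.Time
	backoff   time.Duration
}

var errNotRegistered = errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

func (p *pendingOp) try(op func() error) error {
	now := time.Now()
	if now.Before(p.notBefore) {
		return fmt.Errorf("no retries permitted until %s", p.notBefore.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		p.notBefore = now.Add(p.backoff) // durationBeforeRetry
		return err
	}
	return nil
}

func main() {
	attempts := 0
	op := &pendingOp{backoff: 500 * time.Millisecond}
	for {
		err := op.try(func() error {
			attempts++
			if attempts < 3 { // pretend registration lands on the 3rd try
				return errNotRegistered
			}
			return nil
		})
		if err == nil {
			fmt.Println("unmount succeeded after", attempts, "attempts")
			return
		}
		time.Sleep(100 * time.Millisecond) // reconciler loop period
	}
}
```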
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.832988 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.833281 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-utilities\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.833320 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-catalog-content\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.833340 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k99vx\" (UniqueName: \"kubernetes.io/projected/687a3665-1f60-48cf-ad90-013c77a6fefb-kube-api-access-k99vx\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.834225 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-utilities\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.834451 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-catalog-content\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.834639 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.334627579 +0000 UTC m=+133.665900831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.894358 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k99vx\" (UniqueName: \"kubernetes.io/projected/687a3665-1f60-48cf-ad90-013c77a6fefb-kube-api-access-k99vx\") pod \"redhat-marketplace-nn46q\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.896532 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszz2"
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.939168 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:56 crc kubenswrapper[4719]: E1124 08:55:56.940207 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.440166691 +0000 UTC m=+133.771439943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:56 crc kubenswrapper[4719]: I1124 08:55:56.978587 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nn46q"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.038832 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrlw6"]
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.041528 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.042009 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.541991793 +0000 UTC m=+133.873265045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.056128 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" event={"ID":"732e3b35-79a1-47d8-bc13-44ddffb8de36","Type":"ContainerStarted","Data":"0a02dd4ff5d01a26a77ba7173ef1b69be4c6db2568901e0346baa3c228ec34f9"}
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.102256 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"76df8267-a4e8-4b23-8d9d-6d0c957929cc","Type":"ContainerStarted","Data":"d7646ae9f5e78c15ee3db9bf538d20627be955c3879051afcfb63ad3f73d06cd"}
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.103388 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:55:57 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:55:57 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.103475 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.126497 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sw8vr"]
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.133950 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sw8vr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.137603 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.145082 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.145283 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.645234796 +0000 UTC m=+133.976508048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
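The router probe record above embeds standard healthz output: one [+] or [-] line per named check, then a verdict line. A small parser for triaging such output (hypothetical helper, not part of the kubelet):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseHealthz splits healthz probe output like the block above
// ("[-]backend-http failed: reason withheld", "[+]process-running ok")
// into passing and failing check names.
func parseHealthz(body string) (passed, failed []string) {
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := sc.Text()
		if len(line) < 4 || line[0] != '[' {
			continue // verdict lines such as "healthz check failed"
		}
		fields := strings.Fields(line[3:])
		if len(fields) == 0 {
			continue
		}
		name := strings.TrimSuffix(fields[0], ":")
		switch line[:3] {
		case "[+]":
			passed = append(passed, name)
		case "[-]":
			failed = append(failed, name)
		}
	}
	return passed, failed
}

func main() {
	body := "[-]backend-http failed: reason withheld\n" +
		"[-]has-synced failed: reason withheld\n" +
		"[+]process-running ok\n" +
		"healthz check failed\n"
	passed, failed := parseHealthz(body)
	fmt.Println("passing:", passed) // [process-running]
	fmt.Println("failing:", failed) // [backend-http has-synced]
}
```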
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.145949 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.146982 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.146961488 podStartE2EDuration="4.146961488s" podCreationTimestamp="2025-11-24 08:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:57.145298618 +0000 UTC m=+133.476571880" watchObservedRunningTime="2025-11-24 08:55:57.146961488 +0000 UTC m=+133.478234750"
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.147027 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.646643428 +0000 UTC m=+133.977916700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
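The pod_startup_latency_tracker record above shows where podStartSLOduration comes from: with both pull timestamps at the zero time (no image pull observed), it is simply the watch-observed running time minus podCreationTimestamp. Checking the arithmetic with the timestamps copied from that record:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the pod_startup_latency_tracker line.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-24 08:55:53 +0000 UTC")
	running, _ := time.Parse(layout, "2025-11-24 08:55:57.146961488 +0000 UTC")
	// No pulling was observed (firstStartedPulling is the zero time), so
	// the SLO duration is just the difference of these two instants.
	fmt.Println(running.Sub(created)) // 4.146961488s, matching podStartSLOduration
}
```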
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.151016 4719 generic.go:334] "Generic (PLEG): container finished" podID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerID="c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65" exitCode=0
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.151154 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjzxt" event={"ID":"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5","Type":"ContainerDied","Data":"c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65"}
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.151192 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjzxt" event={"ID":"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5","Type":"ContainerStarted","Data":"a823c705c434c6663d75b523cb76c9ca65f0ecc5e1e41ed6bcc56d6d6f367756"}
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.154991 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljp9t" event={"ID":"ff5bf07f-1775-4310-a0b3-5306a4202228","Type":"ContainerStarted","Data":"6af65096cd566564aace66ead967740b5178239dcd776419e469ad8d6d171f3c"}
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.206093 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sw8vr"]
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.225467 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.247544 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tdvl4"]
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.248810 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.249403 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-catalog-content\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.249476 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvjn\" (UniqueName: \"kubernetes.io/projected/d599ee52-0a8d-4f3b-8ffe-624b8d580382-kube-api-access-plvjn\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.249569 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-utilities\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-utilities\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.250727 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.750687136 +0000 UTC m=+134.081960398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.374619 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-catalog-content\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.375168 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvjn\" (UniqueName: \"kubernetes.io/projected/d599ee52-0a8d-4f3b-8ffe-624b8d580382-kube-api-access-plvjn\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.375255 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-utilities\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.375295 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.375833 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.875815901 +0000 UTC m=+134.207089153 (durationBeforeRetry 500ms). 
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.376429 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-catalog-content\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.376940 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-utilities\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.434093 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plvjn\" (UniqueName: \"kubernetes.io/projected/d599ee52-0a8d-4f3b-8ffe-624b8d580382-kube-api-access-plvjn\") pod \"redhat-operators-sw8vr\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " pod="openshift-marketplace/redhat-operators-sw8vr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.477756 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.478278 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sw8vr"
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.478614 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.978587171 +0000 UTC m=+134.309860423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.479724 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.481626 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:57.981606011 +0000 UTC m=+134.312879263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.538127 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kb6cr"]
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.540398 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.584459 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.584939 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.084919347 +0000 UTC m=+134.416192599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
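By this point the log is dominated by the same two failure signatures repeating for one volume. When triaging a capture like this, aggregating beats reading linearly; a throwaway counter along these lines (hypothetical helper, journal text on stdin) makes the pattern obvious:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Groups the repeating CSI failures by operation and volume, e.g.
	//   UnmountVolume.TearDown pvc-657094db-... <count>
	re := regexp.MustCompile(`(UnmountVolume\.TearDown|MountVolume\.MountDevice) failed for volume "([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Println(k, n)
	}
}
```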
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.591603 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kb6cr"]
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.679786 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.685933 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5n4\" (UniqueName: \"kubernetes.io/projected/752be1f4-8bf3-403b-a203-bae1d69d05bb-kube-api-access-kc5n4\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.686025 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.686094 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-utilities\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.686147 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-catalog-content\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.686762 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.186744138 +0000 UTC m=+134.518017390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.789477 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.789757 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc5n4\" (UniqueName: \"kubernetes.io/projected/752be1f4-8bf3-403b-a203-bae1d69d05bb-kube-api-access-kc5n4\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.789824 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-utilities\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.789862 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-catalog-content\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.789954 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.289937001 +0000 UTC m=+134.621210243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.809516 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-utilities\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.809800 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-catalog-content\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.863159 4719 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fr4v7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]log ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]etcd ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/max-in-flight-filter ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/project.openshift.io-projectcache ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/openshift.io-startinformers ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/openshift.io-restmapperupdater ok
Nov 24 08:55:57 crc kubenswrapper[4719]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 24 08:55:57 crc kubenswrapper[4719]: livez check failed
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.863550 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7" podUID="6fd95d6b-226e-4eef-a232-85205a89d877" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.875695 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc5n4\" (UniqueName: \"kubernetes.io/projected/752be1f4-8bf3-403b-a203-bae1d69d05bb-kube-api-access-kc5n4\") pod \"redhat-operators-kb6cr\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.891113 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.891645 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.391627988 +0000 UTC m=+134.722901240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.945233 4719 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.956788 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszz2"]
Nov 24 08:55:57 crc kubenswrapper[4719]: W1124 08:55:57.980406 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4dd48f4_5b1b_4e66_9a2a_38d5005672b3.slice/crio-a8b0b17ca4588dc1882423816ffbc4a62e3c60fb354e2a392d0409f9aecca8da WatchSource:0}: Error finding container a8b0b17ca4588dc1882423816ffbc4a62e3c60fb354e2a392d0409f9aecca8da: Status 404 returned error can't find the container with id a8b0b17ca4588dc1882423816ffbc4a62e3c60fb354e2a392d0409f9aecca8da
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.992285 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.992575 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.492540403 +0000 UTC m=+134.823813655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
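The plugin_watcher record above (08:55:57.945) is the turning point: the driver's registration socket has finally appeared under /var/lib/kubelet/plugins_registry. The kubelet discovers such sockets by watching that directory with fsnotify; a minimal sketch of the same pattern, using github.com/fsnotify/fsnotify directly rather than the kubelet's watcher code:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Directory where CSI drivers drop their registration sockets.
	if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// A newly created *-reg.sock file means a plugin wants to register.
			if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, "-reg.sock") {
				fmt.Println("adding socket path to desired state cache:", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```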
Nov 24 08:55:57 crc kubenswrapper[4719]: I1124 08:55:57.992749 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:57 crc kubenswrapper[4719]: E1124 08:55:57.993473 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.4934653 +0000 UTC m=+134.824738552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.056430 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nn46q"]
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.094368 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:58 crc kubenswrapper[4719]: E1124 08:55:58.096071 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.596044904 +0000 UTC m=+134.927318156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.096131 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:55:58 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld
Nov 24 08:55:58 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:55:58 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.096256 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.105466 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb6cr"
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.202425 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:58 crc kubenswrapper[4719]: E1124 08:55:58.203314 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.703267957 +0000 UTC m=+135.034541209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.243291 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszz2" event={"ID":"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3","Type":"ContainerStarted","Data":"a8b0b17ca4588dc1882423816ffbc4a62e3c60fb354e2a392d0409f9aecca8da"}
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.295477 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" event={"ID":"732e3b35-79a1-47d8-bc13-44ddffb8de36","Type":"ContainerStarted","Data":"0e13463f8d1d732a1d4028b8c2e81f680f9d98a4c4da082ff6a93549f2a0c6a5"}
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.310876 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:58 crc kubenswrapper[4719]: E1124 08:55:58.311525 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.811503159 +0000 UTC m=+135.142776421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.322646 4719 generic.go:334] "Generic (PLEG): container finished" podID="76df8267-a4e8-4b23-8d9d-6d0c957929cc" containerID="d7646ae9f5e78c15ee3db9bf538d20627be955c3879051afcfb63ad3f73d06cd" exitCode=0
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.322738 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"76df8267-a4e8-4b23-8d9d-6d0c957929cc","Type":"ContainerDied","Data":"d7646ae9f5e78c15ee3db9bf538d20627be955c3879051afcfb63ad3f73d06cd"}
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.327523 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nn46q" event={"ID":"687a3665-1f60-48cf-ad90-013c77a6fefb","Type":"ContainerStarted","Data":"9ac5a3919df87325fd491d02c1845beaa4af8d4f4f97403d943d30406d750bdd"}
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.337229 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrlw6" event={"ID":"d18df24f-85d5-4acf-9469-1bd2c80a3ea6","Type":"ContainerStarted","Data":"52d247be6023b2dd2745caca216ab35c7c162933b2f292ddb97ec844a2480d5a"}
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.364722 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69d62168-687f-4a81-a68e-b2f0a4323967","Type":"ContainerStarted","Data":"c4069199b621c46ca5102a4cabf1d0c6e4ef354f42301ab5c9af3e4fd41aa7f3"}
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.393791 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdvl4" event={"ID":"45ec96ae-4756-4249-b370-ce98fbe47db0","Type":"ContainerStarted","Data":"d4618ccafa62b53b7a20dbdbf61c7311e47558325303c4a1e068e81901cdbad4"}
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.412482 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:58 crc kubenswrapper[4719]: E1124 08:55:58.415373 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:58.9153516 +0000 UTC m=+135.246624852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.426329 4719 generic.go:334] "Generic (PLEG): container finished" podID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerID="e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7" exitCode=0
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.426390 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljp9t" event={"ID":"ff5bf07f-1775-4310-a0b3-5306a4202228","Type":"ContainerDied","Data":"e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7"}
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.455648 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sw8vr"]
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.515350 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 08:55:58 crc kubenswrapper[4719]: E1124 08:55:58.515504 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:59.015479951 +0000 UTC m=+135.346753203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.515925 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:58 crc kubenswrapper[4719]: E1124 08:55:58.516508 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:59.016492931 +0000 UTC m=+135.347766183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.617786 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:58 crc kubenswrapper[4719]: E1124 08:55:58.618802 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 08:55:59.118780566 +0000 UTC m=+135.450053818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.662937 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kb6cr"] Nov 24 08:55:58 crc kubenswrapper[4719]: W1124 08:55:58.677797 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752be1f4_8bf3_403b_a203_bae1d69d05bb.slice/crio-078a37c3166ba3c4ed74be993387b29acba39dd6f19f237af7528b5ed2f81488 WatchSource:0}: Error finding container 078a37c3166ba3c4ed74be993387b29acba39dd6f19f237af7528b5ed2f81488: Status 404 returned error can't find the container with id 078a37c3166ba3c4ed74be993387b29acba39dd6f19f237af7528b5ed2f81488 Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.720743 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:58 crc kubenswrapper[4719]: E1124 08:55:58.721522 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 08:55:59.221496445 +0000 UTC m=+135.552769697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j26j4" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.767473 4719 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T08:55:57.945266545Z","Handler":null,"Name":""} Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.781523 4719 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.781590 4719 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.833771 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.867747 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.940690 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.970735 4719 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 08:55:58 crc kubenswrapper[4719]: I1124 08:55:58.970807 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.012610 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j26j4\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.092980 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:55:59 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld
Nov 24 08:55:59 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:55:59 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.093502 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.093530 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.101277 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.281913 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z8p7k"
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.468311 4719 generic.go:334] "Generic (PLEG): container finished" podID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerID="8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb" exitCode=0
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.468414 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdvl4" event={"ID":"45ec96ae-4756-4249-b370-ce98fbe47db0","Type":"ContainerDied","Data":"8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.480750 4719 generic.go:334] "Generic (PLEG): container finished" podID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerID="5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada" exitCode=0
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.480861 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszz2" event={"ID":"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3","Type":"ContainerDied","Data":"5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.514385 4719 generic.go:334] "Generic (PLEG): container finished" podID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerID="f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944" exitCode=0
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.514533 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nn46q" event={"ID":"687a3665-1f60-48cf-ad90-013c77a6fefb","Type":"ContainerDied","Data":"f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.544519 4719 generic.go:334] "Generic (PLEG): container finished" podID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerID="91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814" exitCode=0
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.545307 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6cr" event={"ID":"752be1f4-8bf3-403b-a203-bae1d69d05bb","Type":"ContainerDied","Data":"91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.545368 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6cr" event={"ID":"752be1f4-8bf3-403b-a203-bae1d69d05bb","Type":"ContainerStarted","Data":"078a37c3166ba3c4ed74be993387b29acba39dd6f19f237af7528b5ed2f81488"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.567528 4719 generic.go:334] "Generic (PLEG): container finished" podID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerID="bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14" exitCode=0
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.567598 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrlw6" event={"ID":"d18df24f-85d5-4acf-9469-1bd2c80a3ea6","Type":"ContainerDied","Data":"bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.583952 4719 generic.go:334] "Generic (PLEG): container finished" podID="69d62168-687f-4a81-a68e-b2f0a4323967" containerID="aacf9d0549d5f3d4be412c6f0235fb342c066c09b537c7633428555604f7fb3b" exitCode=0
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.584109 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69d62168-687f-4a81-a68e-b2f0a4323967","Type":"ContainerDied","Data":"aacf9d0549d5f3d4be412c6f0235fb342c066c09b537c7633428555604f7fb3b"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.602507 4719 generic.go:334] "Generic (PLEG): container finished" podID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerID="d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c" exitCode=0
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.603115 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw8vr" event={"ID":"d599ee52-0a8d-4f3b-8ffe-624b8d580382","Type":"ContainerDied","Data":"d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.603182 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw8vr" event={"ID":"d599ee52-0a8d-4f3b-8ffe-624b8d580382","Type":"ContainerStarted","Data":"cc4e73d63d422aea0949e4ead62803a1773b0ebbe177f777e7f23a09a4b35b20"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.625631 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0b74b9b-50d6-454d-b527-a5980f7d762e" containerID="1b59ee9d23e52510a03a492c3244794091c9706755f09a92fb9d69b40d78ef10" exitCode=0
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.641460 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" event={"ID":"c0b74b9b-50d6-454d-b527-a5980f7d762e","Type":"ContainerDied","Data":"1b59ee9d23e52510a03a492c3244794091c9706755f09a92fb9d69b40d78ef10"}
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.758374 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zzpsx" podStartSLOduration=18.758342564 podStartE2EDuration="18.758342564s" podCreationTimestamp="2025-11-24 08:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:55:59.702344087 +0000 UTC m=+136.033617339" watchObservedRunningTime="2025-11-24 08:55:59.758342564 +0000 UTC m=+136.089615816"
Nov 24 08:55:59 crc kubenswrapper[4719]: I1124 08:55:59.761807 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j26j4"]
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.096763 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:56:00 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld
Nov 24 08:56:00 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:56:00 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.096850 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.135376 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.279157 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kubelet-dir\") pod \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\" (UID: \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\") "
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.279634 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kube-api-access\") pod \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\" (UID: \"76df8267-a4e8-4b23-8d9d-6d0c957929cc\") "
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.280153 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "76df8267-a4e8-4b23-8d9d-6d0c957929cc" (UID: "76df8267-a4e8-4b23-8d9d-6d0c957929cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.280385 4719 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.302255 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "76df8267-a4e8-4b23-8d9d-6d0c957929cc" (UID: "76df8267-a4e8-4b23-8d9d-6d0c957929cc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.381742 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76df8267-a4e8-4b23-8d9d-6d0c957929cc-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.543836 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.646399 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.646363 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"76df8267-a4e8-4b23-8d9d-6d0c957929cc","Type":"ContainerDied","Data":"a215826fb649253a9c06272e9ce303bdae4f1cd4cd3460dda4a6972ea6145732"}
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.646594 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a215826fb649253a9c06272e9ce303bdae4f1cd4cd3460dda4a6972ea6145732"
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.654180 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" event={"ID":"6739d077-6441-4b90-8e23-be9b0e3cb12a","Type":"ContainerStarted","Data":"ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43"}
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.654234 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.654252 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" event={"ID":"6739d077-6441-4b90-8e23-be9b0e3cb12a","Type":"ContainerStarted","Data":"5806835d62c644405cbe2d39b70e86e45f5f0c318f4eb13380accaee23dbc20d"}
Nov 24 08:56:00 crc kubenswrapper[4719]: I1124 08:56:00.692941 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" podStartSLOduration=114.692913078 podStartE2EDuration="1m54.692913078s" podCreationTimestamp="2025-11-24 08:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:56:00.683643682 +0000 UTC m=+137.014916954" watchObservedRunningTime="2025-11-24 08:56:00.692913078 +0000 UTC m=+137.024186320"
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.001065 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq"
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.121930 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:56:01 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld
Nov 24 08:56:01 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:56:01 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.122433 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.205685 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.306918 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.311861 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69d62168-687f-4a81-a68e-b2f0a4323967-kubelet-dir\") pod \"69d62168-687f-4a81-a68e-b2f0a4323967\" (UID: \"69d62168-687f-4a81-a68e-b2f0a4323967\") "
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.311920 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69d62168-687f-4a81-a68e-b2f0a4323967-kube-api-access\") pod \"69d62168-687f-4a81-a68e-b2f0a4323967\" (UID: \"69d62168-687f-4a81-a68e-b2f0a4323967\") "
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.312894 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69d62168-687f-4a81-a68e-b2f0a4323967-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69d62168-687f-4a81-a68e-b2f0a4323967" (UID: "69d62168-687f-4a81-a68e-b2f0a4323967"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.322742 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d62168-687f-4a81-a68e-b2f0a4323967-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69d62168-687f-4a81-a68e-b2f0a4323967" (UID: "69d62168-687f-4a81-a68e-b2f0a4323967"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.413991 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b74b9b-50d6-454d-b527-a5980f7d762e-secret-volume\") pod \"c0b74b9b-50d6-454d-b527-a5980f7d762e\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") "
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.415117 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b74b9b-50d6-454d-b527-a5980f7d762e-config-volume\") pod \"c0b74b9b-50d6-454d-b527-a5980f7d762e\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") "
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.415166 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlqjl\" (UniqueName: \"kubernetes.io/projected/c0b74b9b-50d6-454d-b527-a5980f7d762e-kube-api-access-hlqjl\") pod \"c0b74b9b-50d6-454d-b527-a5980f7d762e\" (UID: \"c0b74b9b-50d6-454d-b527-a5980f7d762e\") "
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.415973 4719 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69d62168-687f-4a81-a68e-b2f0a4323967-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.416021 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69d62168-687f-4a81-a68e-b2f0a4323967-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.416351 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0b74b9b-50d6-454d-b527-a5980f7d762e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c0b74b9b-50d6-454d-b527-a5980f7d762e" (UID: "c0b74b9b-50d6-454d-b527-a5980f7d762e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.422145 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b74b9b-50d6-454d-b527-a5980f7d762e-kube-api-access-hlqjl" (OuterVolumeSpecName: "kube-api-access-hlqjl") pod "c0b74b9b-50d6-454d-b527-a5980f7d762e" (UID: "c0b74b9b-50d6-454d-b527-a5980f7d762e"). InnerVolumeSpecName "kube-api-access-hlqjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.426388 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b74b9b-50d6-454d-b527-a5980f7d762e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c0b74b9b-50d6-454d-b527-a5980f7d762e" (UID: "c0b74b9b-50d6-454d-b527-a5980f7d762e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.517890 4719 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b74b9b-50d6-454d-b527-a5980f7d762e-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.517955 4719 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b74b9b-50d6-454d-b527-a5980f7d762e-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.517967 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlqjl\" (UniqueName: \"kubernetes.io/projected/c0b74b9b-50d6-454d-b527-a5980f7d762e-kube-api-access-hlqjl\") on node \"crc\" DevicePath \"\""
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.678202 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.678537 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q" event={"ID":"c0b74b9b-50d6-454d-b527-a5980f7d762e","Type":"ContainerDied","Data":"3a183ce7b9a37d3ae336dc7b14f14f96643cb6db1d218dd0e1b51f6477638bb0"}
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.678584 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a183ce7b9a37d3ae336dc7b14f14f96643cb6db1d218dd0e1b51f6477638bb0"
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.701024 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.701363 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69d62168-687f-4a81-a68e-b2f0a4323967","Type":"ContainerDied","Data":"c4069199b621c46ca5102a4cabf1d0c6e4ef354f42301ab5c9af3e4fd41aa7f3"}
Nov 24 08:56:01 crc kubenswrapper[4719]: I1124 08:56:01.701398 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4069199b621c46ca5102a4cabf1d0c6e4ef354f42301ab5c9af3e4fd41aa7f3"
Nov 24 08:56:02 crc kubenswrapper[4719]: I1124 08:56:02.091818 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:56:02 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld
Nov 24 08:56:02 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:56:02 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:56:02 crc kubenswrapper[4719]: I1124 08:56:02.092301 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:56:02 crc kubenswrapper[4719]: I1124 08:56:02.834264 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7"
Nov 24 08:56:02 crc kubenswrapper[4719]: I1124 08:56:02.849580 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fr4v7"
Nov 24 08:56:02 crc kubenswrapper[4719]: I1124 08:56:02.933170 4719 patch_prober.go:28] interesting pod/console-f9d7485db-l4lt5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Nov 24 08:56:02 crc kubenswrapper[4719]: I1124 08:56:02.933241 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-l4lt5" podUID="0437d205-eb04-4136-a158-01d8729c335c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Nov 24 08:56:03 crc kubenswrapper[4719]: I1124 08:56:03.008706 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:03 crc kubenswrapper[4719]: I1124 08:56:03.008768 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:03 crc kubenswrapper[4719]: I1124 08:56:03.008810 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:03 crc kubenswrapper[4719]: I1124 08:56:03.008875 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:03 crc kubenswrapper[4719]: I1124 08:56:03.086718 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:56:03 crc kubenswrapper[4719]: [-]has-synced failed: reason withheld
Nov 24 08:56:03 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:56:03 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:56:03 crc kubenswrapper[4719]: I1124 08:56:03.086825 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:56:04 crc kubenswrapper[4719]: I1124 08:56:04.066265 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5"
Nov 24 08:56:04 crc kubenswrapper[4719]: I1124 08:56:04.083487 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mn2gk"
Nov 24 08:56:04 crc kubenswrapper[4719]: I1124 08:56:04.087288 4719 patch_prober.go:28] interesting pod/router-default-5444994796-4887s container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 08:56:04 crc kubenswrapper[4719]: [+]has-synced ok
Nov 24 08:56:04 crc kubenswrapper[4719]: [+]process-running ok
Nov 24 08:56:04 crc kubenswrapper[4719]: healthz check failed
Nov 24 08:56:04 crc kubenswrapper[4719]: I1124 08:56:04.087415 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4887s" podUID="8389675e-5e4d-40d2-a5c8-b3e3587bf67e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 08:56:04 crc kubenswrapper[4719]: I1124 08:56:04.271918 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7"
Nov 24 08:56:04 crc kubenswrapper[4719]: I1124 08:56:04.313948 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctxrd"
Nov 24 08:56:05 crc kubenswrapper[4719]: I1124 08:56:05.085621 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:56:05 crc kubenswrapper[4719]: I1124 08:56:05.088600 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4887s"
Nov 24 08:56:12 crc kubenswrapper[4719]: I1124 08:56:12.935958 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-l4lt5"
Nov 24 08:56:12 crc kubenswrapper[4719]: I1124 08:56:12.949488 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-l4lt5"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.004507 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.004557 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.004637 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.004569 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.004737 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-bzb4s"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.005768 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"07651ba2ea1f0cc9d78f216f1536d255d99f1fb6da33f4d6184beee3b9a249d5"} pod="openshift-console/downloads-7954f5f757-bzb4s" containerMessage="Container download-server failed liveness probe, will be restarted"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.008974 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" containerID="cri-o://07651ba2ea1f0cc9d78f216f1536d255d99f1fb6da33f4d6184beee3b9a249d5" gracePeriod=2
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.011980 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.012074 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.661275 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.661491 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.664460 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.665625 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.673332 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.685623 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.724201 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.763075 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.763161 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.766053 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.779903 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.792654 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:56:13 crc kubenswrapper[4719]: I1124 08:56:13.806820 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:56:14 crc kubenswrapper[4719]: I1124 08:56:14.030815 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 08:56:14 crc kubenswrapper[4719]: I1124 08:56:14.059792 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 08:56:15 crc kubenswrapper[4719]: I1124 08:56:15.067004 4719 generic.go:334] "Generic (PLEG): container finished" podID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerID="07651ba2ea1f0cc9d78f216f1536d255d99f1fb6da33f4d6184beee3b9a249d5" exitCode=0
Nov 24 08:56:15 crc kubenswrapper[4719]: I1124 08:56:15.067094 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bzb4s" event={"ID":"f181c2b3-1876-4446-b16e-fbbaba6f7c95","Type":"ContainerDied","Data":"07651ba2ea1f0cc9d78f216f1536d255d99f1fb6da33f4d6184beee3b9a249d5"}
Nov 24 08:56:19 crc kubenswrapper[4719]: I1124 08:56:19.107067 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4"
Nov 24 08:56:23 crc kubenswrapper[4719]: I1124 08:56:23.003830 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:23 crc kubenswrapper[4719]: I1124 08:56:23.004248 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:23 crc kubenswrapper[4719]: I1124 08:56:23.883604 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mzg5s"
Nov 24 08:56:29 crc kubenswrapper[4719]: I1124 08:56:29.731872 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:56:29 crc kubenswrapper[4719]: I1124 08:56:29.733918 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 24 08:56:29 crc kubenswrapper[4719]: I1124 08:56:29.747669 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd6beab7-bbb8-4abb-98b1-60c1f8360757-metrics-certs\") pod \"network-metrics-daemon-5hv9d\" (UID: \"bd6beab7-bbb8-4abb-98b1-60c1f8360757\") " pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:56:29 crc kubenswrapper[4719]: I1124 08:56:29.935289 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 24 08:56:29 crc kubenswrapper[4719]: I1124 08:56:29.943599 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5hv9d"
Nov 24 08:56:33 crc kubenswrapper[4719]: I1124 08:56:33.002758 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:33 crc kubenswrapper[4719]: I1124 08:56:33.002832 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:34 crc kubenswrapper[4719]: I1124 08:56:34.562665 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 08:56:34 crc kubenswrapper[4719]: I1124 08:56:34.563158 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 08:56:43 crc kubenswrapper[4719]: I1124 08:56:43.002457 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:43 crc kubenswrapper[4719]: I1124 08:56:43.002901 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:53 crc kubenswrapper[4719]: I1124 08:56:53.002813 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Nov 24 08:56:53 crc kubenswrapper[4719]: I1124 08:56:53.003485 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Nov 24 08:56:53 crc kubenswrapper[4719]: E1124 08:56:53.972370 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 24 08:56:53 crc kubenswrapper[4719]: E1124 08:56:53.972951 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf4gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lszz2_openshift-marketplace(f4dd48f4-5b1b-4e66-9a2a-38d5005672b3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 08:56:53 crc kubenswrapper[4719]: E1124 08:56:53.974117 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lszz2" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3"
Nov 24 08:56:57 crc kubenswrapper[4719]: E1124 08:56:57.570997 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lszz2" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3"
Nov 24 08:56:57 crc kubenswrapper[4719]: E1124 08:56:57.656060 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 24 08:56:57 crc kubenswrapper[4719]: E1124 08:56:57.656233 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kc5n4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kb6cr_openshift-marketplace(752be1f4-8bf3-403b-a203-bae1d69d05bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 08:56:57 crc kubenswrapper[4719]: E1124 08:56:57.658129 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kb6cr" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb"
Nov 24 08:56:57 crc kubenswrapper[4719]: E1124 08:56:57.766450 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 24 08:56:57 crc kubenswrapper[4719]: E1124 08:56:57.766730 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plvjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sw8vr_openshift-marketplace(d599ee52-0a8d-4f3b-8ffe-624b8d580382): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 08:56:57 crc kubenswrapper[4719]: E1124 08:56:57.768003 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sw8vr" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382"
Nov 24 08:56:59 crc kubenswrapper[4719]: E1124 08:56:59.973102 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sw8vr" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382"
Nov 24 08:56:59 crc kubenswrapper[4719]: E1124 08:56:59.973509 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kb6cr" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb"
Nov 24 08:57:00 crc kubenswrapper[4719]: E1124 08:57:00.197407 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 24 08:57:00 crc kubenswrapper[4719]: E1124 08:57:00.197546 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch4nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ljp9t_openshift-marketplace(ff5bf07f-1775-4310-a0b3-5306a4202228): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 08:57:00 crc kubenswrapper[4719]: E1124 08:57:00.198703 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ljp9t" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228"
Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.075390 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ljp9t" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228"
Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.251330 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.251974 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qx8k5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tdvl4_openshift-marketplace(45ec96ae-4756-4249-b370-ce98fbe47db0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.253848 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tdvl4" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0"
Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.286874 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.287583 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ckjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mjzxt_openshift-marketplace(30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.293502 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mjzxt" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" Nov 24 08:57:02 crc kubenswrapper[4719]: I1124 08:57:02.360117 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bzb4s" event={"ID":"f181c2b3-1876-4446-b16e-fbbaba6f7c95","Type":"ContainerStarted","Data":"8f27e31f30c9d82ec0a7e5891ec53a5bcd7aff27221dafe38ce926ea65a8e464"} Nov 24 08:57:02 crc kubenswrapper[4719]: I1124 08:57:02.360683 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bzb4s" Nov 24 08:57:02 crc kubenswrapper[4719]: I1124 08:57:02.360904 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 24 08:57:02 crc kubenswrapper[4719]: I1124 08:57:02.360959 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.361689 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tdvl4" 
podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.376029 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mjzxt" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.384846 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.385228 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k99vx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nn46q_openshift-marketplace(687a3665-1f60-48cf-ad90-013c77a6fefb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.386378 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nn46q" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.485345 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.485621 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4szrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nrlw6_openshift-marketplace(d18df24f-85d5-4acf-9469-1bd2c80a3ea6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 08:57:02 crc kubenswrapper[4719]: E1124 08:57:02.487308 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nrlw6" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" Nov 24 08:57:02 crc kubenswrapper[4719]: W1124 08:57:02.584917 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-ff78f09e7fd070507a5caeb0acf816b62cc51a78bc2a16eff2f1de42edb118ac WatchSource:0}: Error finding container ff78f09e7fd070507a5caeb0acf816b62cc51a78bc2a16eff2f1de42edb118ac: Status 404 returned error can't find the container with id ff78f09e7fd070507a5caeb0acf816b62cc51a78bc2a16eff2f1de42edb118ac Nov 24 08:57:02 crc kubenswrapper[4719]: I1124 08:57:02.679836 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5hv9d"] Nov 24 08:57:02 crc kubenswrapper[4719]: W1124 08:57:02.690709 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd6beab7_bbb8_4abb_98b1_60c1f8360757.slice/crio-e2d09d5708f7eba94b90253d5233908056ca1e58fa0d3f8e4a6258c25c3d8dbe WatchSource:0}: Error finding container e2d09d5708f7eba94b90253d5233908056ca1e58fa0d3f8e4a6258c25c3d8dbe: Status 404 returned error can't find the container with id e2d09d5708f7eba94b90253d5233908056ca1e58fa0d3f8e4a6258c25c3d8dbe Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.002399 4719 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.002433 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.003125 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.003022 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.361819 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b8d1e504c524258cf16b63605bd32a6e1c647e03be90b1e47d5434b63f0b8e2b"} Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.362270 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ff78f09e7fd070507a5caeb0acf816b62cc51a78bc2a16eff2f1de42edb118ac"} Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.363513 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" event={"ID":"bd6beab7-bbb8-4abb-98b1-60c1f8360757","Type":"ContainerStarted","Data":"e2d09d5708f7eba94b90253d5233908056ca1e58fa0d3f8e4a6258c25c3d8dbe"} Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.366724 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"05333f033b56a94f0cd929eb5d86d879e3cfd5f5590959e2069998ff7878eac7"} Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.366801 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f531206e73c98647c084e4cec117ed31fb7e809ba43168a7bd23c6d929155563"} Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.384408 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"25bcb4cdb76c4580e17e0029fd297a772a8be5c46c61770b73aca0c37e6760df"} Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.384476 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4be09b4078d140337a4829b2b18b5c4256535808b28c3a19b88d912dcf3f80c0"} Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.386488 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.387260 4719 patch_prober.go:28] interesting pod/downloads-7954f5f757-bzb4s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 24 08:57:03 crc kubenswrapper[4719]: I1124 08:57:03.387304 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bzb4s" podUID="f181c2b3-1876-4446-b16e-fbbaba6f7c95" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 24 08:57:03 crc kubenswrapper[4719]: E1124 08:57:03.391624 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nrlw6" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" Nov 24 08:57:03 crc kubenswrapper[4719]: E1124 08:57:03.391934 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nn46q" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" Nov 24 08:57:04 crc kubenswrapper[4719]: I1124 08:57:04.414086 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" event={"ID":"bd6beab7-bbb8-4abb-98b1-60c1f8360757","Type":"ContainerStarted","Data":"5af98eb2746c846b6d69737ee739c760347cd89d6e9430638238eba18de5c52e"} Nov 24 08:57:04 crc kubenswrapper[4719]: I1124 08:57:04.561925 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:57:04 crc kubenswrapper[4719]: I1124 08:57:04.561988 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:57:04 crc kubenswrapper[4719]: I1124 08:57:04.918550 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g48p5"] Nov 24 08:57:05 crc kubenswrapper[4719]: I1124 08:57:05.421093 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5hv9d" event={"ID":"bd6beab7-bbb8-4abb-98b1-60c1f8360757","Type":"ContainerStarted","Data":"cf06d9f8cdb2afbfa6a12b4f376bfcbce34df7c5e3d16ee907c135c77049dad5"} Nov 24 08:57:05 crc kubenswrapper[4719]: I1124 08:57:05.442828 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/network-metrics-daemon-5hv9d" podStartSLOduration=180.442810785 podStartE2EDuration="3m0.442810785s" podCreationTimestamp="2025-11-24 08:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:57:05.438251277 +0000 UTC m=+201.769524549" watchObservedRunningTime="2025-11-24 08:57:05.442810785 +0000 UTC m=+201.774084037" Nov 24 08:57:13 crc kubenswrapper[4719]: I1124 08:57:13.007357 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bzb4s" Nov 24 08:57:16 crc kubenswrapper[4719]: I1124 08:57:16.486594 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw8vr" event={"ID":"d599ee52-0a8d-4f3b-8ffe-624b8d580382","Type":"ContainerStarted","Data":"a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a"} Nov 24 08:57:16 crc kubenswrapper[4719]: I1124 08:57:16.490426 4719 generic.go:334] "Generic (PLEG): container finished" podID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerID="250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71" exitCode=0 Nov 24 08:57:16 crc kubenswrapper[4719]: I1124 08:57:16.490495 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszz2" event={"ID":"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3","Type":"ContainerDied","Data":"250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71"} Nov 24 08:57:17 crc kubenswrapper[4719]: I1124 08:57:17.509924 4719 generic.go:334] "Generic (PLEG): container finished" podID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerID="a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a" exitCode=0 Nov 24 08:57:17 crc kubenswrapper[4719]: I1124 08:57:17.509970 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw8vr" event={"ID":"d599ee52-0a8d-4f3b-8ffe-624b8d580382","Type":"ContainerDied","Data":"a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a"} Nov 24 08:57:17 crc kubenswrapper[4719]: I1124 08:57:17.512876 4719 generic.go:334] "Generic (PLEG): container finished" podID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerID="84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b" exitCode=0 Nov 24 08:57:17 crc kubenswrapper[4719]: I1124 08:57:17.512907 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljp9t" event={"ID":"ff5bf07f-1775-4310-a0b3-5306a4202228","Type":"ContainerDied","Data":"84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b"} Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.577542 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw8vr" event={"ID":"d599ee52-0a8d-4f3b-8ffe-624b8d580382","Type":"ContainerStarted","Data":"f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe"} Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.580813 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdvl4" event={"ID":"45ec96ae-4756-4249-b370-ce98fbe47db0","Type":"ContainerStarted","Data":"13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c"} Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.583792 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjzxt" 
event={"ID":"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5","Type":"ContainerStarted","Data":"165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e"} Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.586540 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljp9t" event={"ID":"ff5bf07f-1775-4310-a0b3-5306a4202228","Type":"ContainerStarted","Data":"286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b"} Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.589555 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszz2" event={"ID":"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3","Type":"ContainerStarted","Data":"f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73"} Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.591985 4719 generic.go:334] "Generic (PLEG): container finished" podID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerID="61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75" exitCode=0 Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.592050 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nn46q" event={"ID":"687a3665-1f60-48cf-ad90-013c77a6fefb","Type":"ContainerDied","Data":"61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75"} Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.594004 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6cr" event={"ID":"752be1f4-8bf3-403b-a203-bae1d69d05bb","Type":"ContainerStarted","Data":"0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93"} Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.603604 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sw8vr" podStartSLOduration=3.099886042 podStartE2EDuration="1m30.603590086s" podCreationTimestamp="2025-11-24 08:55:57 +0000 UTC" firstStartedPulling="2025-11-24 08:55:59.616116099 +0000 UTC m=+135.947389351" lastFinishedPulling="2025-11-24 08:57:27.119820143 +0000 UTC m=+223.451093395" observedRunningTime="2025-11-24 08:57:27.601725729 +0000 UTC m=+223.932998971" watchObservedRunningTime="2025-11-24 08:57:27.603590086 +0000 UTC m=+223.934863338" Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.635917 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ljp9t" podStartSLOduration=4.943116567 podStartE2EDuration="1m33.635903073s" podCreationTimestamp="2025-11-24 08:55:54 +0000 UTC" firstStartedPulling="2025-11-24 08:55:58.428482201 +0000 UTC m=+134.759755453" lastFinishedPulling="2025-11-24 08:57:27.121268707 +0000 UTC m=+223.452541959" observedRunningTime="2025-11-24 08:57:27.63479363 +0000 UTC m=+223.966066892" watchObservedRunningTime="2025-11-24 08:57:27.635903073 +0000 UTC m=+223.967176325" Nov 24 08:57:27 crc kubenswrapper[4719]: I1124 08:57:27.670241 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lszz2" podStartSLOduration=4.048172118 podStartE2EDuration="1m31.670219711s" podCreationTimestamp="2025-11-24 08:55:56 +0000 UTC" firstStartedPulling="2025-11-24 08:55:59.489730757 +0000 UTC m=+135.821004009" lastFinishedPulling="2025-11-24 08:57:27.11177835 +0000 UTC m=+223.443051602" observedRunningTime="2025-11-24 08:57:27.669160129 +0000 UTC m=+224.000433401" watchObservedRunningTime="2025-11-24 
08:57:27.670219711 +0000 UTC m=+224.001492963" Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.600653 4719 generic.go:334] "Generic (PLEG): container finished" podID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerID="165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e" exitCode=0 Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.600726 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjzxt" event={"ID":"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5","Type":"ContainerDied","Data":"165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e"} Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.603788 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nn46q" event={"ID":"687a3665-1f60-48cf-ad90-013c77a6fefb","Type":"ContainerStarted","Data":"c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c"} Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.607938 4719 generic.go:334] "Generic (PLEG): container finished" podID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerID="0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93" exitCode=0 Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.608005 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6cr" event={"ID":"752be1f4-8bf3-403b-a203-bae1d69d05bb","Type":"ContainerDied","Data":"0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93"} Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.610503 4719 generic.go:334] "Generic (PLEG): container finished" podID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerID="c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883" exitCode=0 Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.610551 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrlw6" event={"ID":"d18df24f-85d5-4acf-9469-1bd2c80a3ea6","Type":"ContainerDied","Data":"c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883"} Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.615169 4719 generic.go:334] "Generic (PLEG): container finished" podID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerID="13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c" exitCode=0 Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.615203 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdvl4" event={"ID":"45ec96ae-4756-4249-b370-ce98fbe47db0","Type":"ContainerDied","Data":"13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c"} Nov 24 08:57:28 crc kubenswrapper[4719]: I1124 08:57:28.671093 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nn46q" podStartSLOduration=4.020645271 podStartE2EDuration="1m32.671068492s" podCreationTimestamp="2025-11-24 08:55:56 +0000 UTC" firstStartedPulling="2025-11-24 08:55:59.51872815 +0000 UTC m=+135.850001402" lastFinishedPulling="2025-11-24 08:57:28.169151371 +0000 UTC m=+224.500424623" observedRunningTime="2025-11-24 08:57:28.669486454 +0000 UTC m=+225.000759716" watchObservedRunningTime="2025-11-24 08:57:28.671068492 +0000 UTC m=+225.002341744" Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.623258 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdvl4" 
event={"ID":"45ec96ae-4756-4249-b370-ce98fbe47db0","Type":"ContainerStarted","Data":"7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab"} Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.625161 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjzxt" event={"ID":"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5","Type":"ContainerStarted","Data":"0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21"} Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.627002 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6cr" event={"ID":"752be1f4-8bf3-403b-a203-bae1d69d05bb","Type":"ContainerStarted","Data":"45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c"} Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.628743 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrlw6" event={"ID":"d18df24f-85d5-4acf-9469-1bd2c80a3ea6","Type":"ContainerStarted","Data":"9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656"} Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.645720 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tdvl4" podStartSLOduration=4.997258058 podStartE2EDuration="1m35.645703712s" podCreationTimestamp="2025-11-24 08:55:54 +0000 UTC" firstStartedPulling="2025-11-24 08:55:58.400879609 +0000 UTC m=+134.732152861" lastFinishedPulling="2025-11-24 08:57:29.049325263 +0000 UTC m=+225.380598515" observedRunningTime="2025-11-24 08:57:29.643355221 +0000 UTC m=+225.974628493" watchObservedRunningTime="2025-11-24 08:57:29.645703712 +0000 UTC m=+225.976976964" Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.665256 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mjzxt" podStartSLOduration=4.825736063 podStartE2EDuration="1m36.665237733s" podCreationTimestamp="2025-11-24 08:55:53 +0000 UTC" firstStartedPulling="2025-11-24 08:55:57.224067484 +0000 UTC m=+133.555340736" lastFinishedPulling="2025-11-24 08:57:29.063569154 +0000 UTC m=+225.394842406" observedRunningTime="2025-11-24 08:57:29.663621354 +0000 UTC m=+225.994894626" watchObservedRunningTime="2025-11-24 08:57:29.665237733 +0000 UTC m=+225.996510985" Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.708801 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kb6cr" podStartSLOduration=3.267076231 podStartE2EDuration="1m32.708761899s" podCreationTimestamp="2025-11-24 08:55:57 +0000 UTC" firstStartedPulling="2025-11-24 08:55:59.548820076 +0000 UTC m=+135.880093328" lastFinishedPulling="2025-11-24 08:57:28.990505744 +0000 UTC m=+225.321778996" observedRunningTime="2025-11-24 08:57:29.705557332 +0000 UTC m=+226.036830614" watchObservedRunningTime="2025-11-24 08:57:29.708761899 +0000 UTC m=+226.040035171" Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.709274 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nrlw6" podStartSLOduration=4.778474016 podStartE2EDuration="1m35.709268354s" podCreationTimestamp="2025-11-24 08:55:54 +0000 UTC" firstStartedPulling="2025-11-24 08:55:58.346209742 +0000 UTC m=+134.677482994" lastFinishedPulling="2025-11-24 08:57:29.27700408 +0000 UTC m=+225.608277332" observedRunningTime="2025-11-24 08:57:29.690220958 +0000 UTC 
m=+226.021494230" watchObservedRunningTime="2025-11-24 08:57:29.709268354 +0000 UTC m=+226.040541616" Nov 24 08:57:29 crc kubenswrapper[4719]: I1124 08:57:29.959849 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" podUID="6818985a-ffd6-4447-bafe-624296df6660" containerName="oauth-openshift" containerID="cri-o://16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65" gracePeriod=15 Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.326295 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371181 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-57866998d-dczrl"] Nov 24 08:57:30 crc kubenswrapper[4719]: E1124 08:57:30.371397 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6818985a-ffd6-4447-bafe-624296df6660" containerName="oauth-openshift" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371410 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6818985a-ffd6-4447-bafe-624296df6660" containerName="oauth-openshift" Nov 24 08:57:30 crc kubenswrapper[4719]: E1124 08:57:30.371422 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b74b9b-50d6-454d-b527-a5980f7d762e" containerName="collect-profiles" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371428 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b74b9b-50d6-454d-b527-a5980f7d762e" containerName="collect-profiles" Nov 24 08:57:30 crc kubenswrapper[4719]: E1124 08:57:30.371437 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76df8267-a4e8-4b23-8d9d-6d0c957929cc" containerName="pruner" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371445 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76df8267-a4e8-4b23-8d9d-6d0c957929cc" containerName="pruner" Nov 24 08:57:30 crc kubenswrapper[4719]: E1124 08:57:30.371466 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d62168-687f-4a81-a68e-b2f0a4323967" containerName="pruner" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371476 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d62168-687f-4a81-a68e-b2f0a4323967" containerName="pruner" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371578 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b74b9b-50d6-454d-b527-a5980f7d762e" containerName="collect-profiles" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371588 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="6818985a-ffd6-4447-bafe-624296df6660" containerName="oauth-openshift" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371599 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76df8267-a4e8-4b23-8d9d-6d0c957929cc" containerName="pruner" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371609 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d62168-687f-4a81-a68e-b2f0a4323967" containerName="pruner" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.371997 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.381127 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-trusted-ca-bundle\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.381187 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-idp-0-file-data\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.381216 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-router-certs\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.381240 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-login\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.381271 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thxgp\" (UniqueName: \"kubernetes.io/projected/6818985a-ffd6-4447-bafe-624296df6660-kube-api-access-thxgp\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.381349 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.381382 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-session\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382081 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382194 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98febf7b-0d13-49fa-bb94-b5ee68e83b45-audit-dir\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382273 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382337 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382374 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382477 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-service-ca\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382513 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-error\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382570 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382619 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-login\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382650 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382672 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-router-certs\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382733 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxdv8\" (UniqueName: \"kubernetes.io/projected/98febf7b-0d13-49fa-bb94-b5ee68e83b45-kube-api-access-zxdv8\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382757 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-audit-policies\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.382813 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.392644 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.399675 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57866998d-dczrl"] Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.404626 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6818985a-ffd6-4447-bafe-624296df6660-kube-api-access-thxgp" (OuterVolumeSpecName: "kube-api-access-thxgp") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "kube-api-access-thxgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.407182 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.426137 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.483994 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-audit-policies\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484138 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-serving-cert\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484167 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6818985a-ffd6-4447-bafe-624296df6660-audit-dir\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484187 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-service-ca\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484212 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-error\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484230 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-provider-selection\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484253 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6818985a-ffd6-4447-bafe-624296df6660-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484273 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-cliconfig\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484364 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-ocp-branding-template\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484426 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-session\") pod \"6818985a-ffd6-4447-bafe-624296df6660\" (UID: \"6818985a-ffd6-4447-bafe-624296df6660\") " Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484584 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-session\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484595 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484612 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98febf7b-0d13-49fa-bb94-b5ee68e83b45-audit-dir\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484648 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484695 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484718 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484756 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-service-ca\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484777 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-error\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484791 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484825 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484860 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-login\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484882 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484898 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-router-certs\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484944 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxdv8\" (UniqueName: \"kubernetes.io/projected/98febf7b-0d13-49fa-bb94-b5ee68e83b45-kube-api-access-zxdv8\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484961 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-audit-policies\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.484982 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.485022 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.485047 4719 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.485058 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thxgp\" (UniqueName: \"kubernetes.io/projected/6818985a-ffd6-4447-bafe-624296df6660-kube-api-access-thxgp\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.485068 4719 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.485077 4719 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6818985a-ffd6-4447-bafe-624296df6660-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.485085 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.485094 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.485849 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.486332 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.488102 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/98febf7b-0d13-49fa-bb94-b5ee68e83b45-audit-dir\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.493450 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-audit-policies\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.493502 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.494730 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.494970 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.496456 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-login\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.497715 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-router-certs\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.498681 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.499356 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.499827 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.501392 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-service-ca\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.501744 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.502822 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-session\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.504362 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-template-error\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.504817 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6818985a-ffd6-4447-bafe-624296df6660" (UID: "6818985a-ffd6-4447-bafe-624296df6660"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.506144 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.510808 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/98febf7b-0d13-49fa-bb94-b5ee68e83b45-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.513545 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxdv8\" (UniqueName: \"kubernetes.io/projected/98febf7b-0d13-49fa-bb94-b5ee68e83b45-kube-api-access-zxdv8\") pod \"oauth-openshift-57866998d-dczrl\" (UID: \"98febf7b-0d13-49fa-bb94-b5ee68e83b45\") " pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.586697 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.586903 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.586999 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.587079 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.587151 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.587232 4719 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6818985a-ffd6-4447-bafe-624296df6660-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.635577 4719 generic.go:334] "Generic (PLEG): container finished" podID="6818985a-ffd6-4447-bafe-624296df6660" containerID="16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65" exitCode=0 Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.635620 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" event={"ID":"6818985a-ffd6-4447-bafe-624296df6660","Type":"ContainerDied","Data":"16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65"} Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.635655 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" event={"ID":"6818985a-ffd6-4447-bafe-624296df6660","Type":"ContainerDied","Data":"8004a49debda73b8f5c6bc495014824a1e471c97c5eaa35825f6e2e2e1caaab4"} Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.635676 4719 scope.go:117] "RemoveContainer" containerID="16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.635699 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g48p5" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.651001 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g48p5"] Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.653899 4719 scope.go:117] "RemoveContainer" containerID="16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65" Nov 24 08:57:30 crc kubenswrapper[4719]: E1124 08:57:30.654465 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65\": container with ID starting with 16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65 not found: ID does not exist" containerID="16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.654568 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65"} err="failed to get container status \"16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65\": rpc error: code = NotFound desc = could not find container \"16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65\": container with ID starting with 16dceb70b312736bb87597cad8d04091c4899c682fde17d95a2c09ba2d389f65 not found: ID does not exist" Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.655402 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g48p5"] Nov 24 08:57:30 crc kubenswrapper[4719]: I1124 08:57:30.692144 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:31 crc kubenswrapper[4719]: I1124 08:57:31.042671 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57866998d-dczrl"] Nov 24 08:57:31 crc kubenswrapper[4719]: W1124 08:57:31.055562 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98febf7b_0d13_49fa_bb94_b5ee68e83b45.slice/crio-da5cf690600e8c51cd5aceae864c771450547464fdf2c08b19e99f2828b2e548 WatchSource:0}: Error finding container da5cf690600e8c51cd5aceae864c771450547464fdf2c08b19e99f2828b2e548: Status 404 returned error can't find the container with id da5cf690600e8c51cd5aceae864c771450547464fdf2c08b19e99f2828b2e548 Nov 24 08:57:31 crc kubenswrapper[4719]: I1124 08:57:31.642483 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57866998d-dczrl" event={"ID":"98febf7b-0d13-49fa-bb94-b5ee68e83b45","Type":"ContainerStarted","Data":"da22ab504763afc3bd4ddfbd6140986a73427665e2c39ae7ad5e90c471efaf3f"} Nov 24 08:57:31 crc kubenswrapper[4719]: I1124 08:57:31.642525 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57866998d-dczrl" event={"ID":"98febf7b-0d13-49fa-bb94-b5ee68e83b45","Type":"ContainerStarted","Data":"da5cf690600e8c51cd5aceae864c771450547464fdf2c08b19e99f2828b2e548"} Nov 24 08:57:31 crc kubenswrapper[4719]: I1124 08:57:31.644702 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:32 crc kubenswrapper[4719]: I1124 08:57:32.244937 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-57866998d-dczrl" Nov 24 08:57:32 crc kubenswrapper[4719]: I1124 08:57:32.266333 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-57866998d-dczrl" podStartSLOduration=27.266309295 podStartE2EDuration="27.266309295s" podCreationTimestamp="2025-11-24 08:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:57:31.678331921 +0000 UTC m=+228.009605193" watchObservedRunningTime="2025-11-24 08:57:32.266309295 +0000 UTC m=+228.597582547" Nov 24 08:57:32 crc kubenswrapper[4719]: I1124 08:57:32.529446 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6818985a-ffd6-4447-bafe-624296df6660" path="/var/lib/kubelet/pods/6818985a-ffd6-4447-bafe-624296df6660/volumes" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.040692 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.479164 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.479500 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.555245 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.555315 4719 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.567602 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.567930 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.568103 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.578711 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.579108 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c" gracePeriod=600 Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.709103 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.709155 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.921775 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.921836 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.972940 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.973045 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.974690 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:57:34 crc kubenswrapper[4719]: I1124 08:57:34.976639 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:57:35 crc kubenswrapper[4719]: I1124 08:57:35.013889 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:57:35 crc kubenswrapper[4719]: I1124 08:57:35.667930 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c" exitCode=0 Nov 24 08:57:35 crc kubenswrapper[4719]: I1124 08:57:35.669715 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c"} Nov 24 08:57:35 crc kubenswrapper[4719]: I1124 08:57:35.669759 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"59c0a5952ea2845b8905fda1f05065d95523ac4e448325b0905c9139c8ad7b5a"} Nov 24 08:57:35 crc kubenswrapper[4719]: I1124 08:57:35.731202 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:57:35 crc kubenswrapper[4719]: I1124 08:57:35.732884 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:57:35 crc kubenswrapper[4719]: I1124 08:57:35.736385 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:57:36 crc kubenswrapper[4719]: I1124 08:57:36.880997 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrlw6"] Nov 24 08:57:36 crc kubenswrapper[4719]: I1124 08:57:36.897635 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:57:36 crc kubenswrapper[4719]: I1124 08:57:36.897677 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:57:36 crc kubenswrapper[4719]: I1124 08:57:36.947001 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:57:36 crc kubenswrapper[4719]: I1124 08:57:36.980566 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nn46q" Nov 24 08:57:36 crc kubenswrapper[4719]: I1124 08:57:36.980677 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nn46q" Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.029402 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nn46q" Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.480294 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.481495 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.520254 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.683016 4719 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-nrlw6" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerName="registry-server" containerID="cri-o://9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656" gracePeriod=2 Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.731386 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nn46q" Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.739823 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.746057 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.882531 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tdvl4"] Nov 24 08:57:37 crc kubenswrapper[4719]: I1124 08:57:37.882784 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tdvl4" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerName="registry-server" containerID="cri-o://7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab" gracePeriod=2 Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.080816 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.113306 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kb6cr" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.113357 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kb6cr" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.114302 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-catalog-content\") pod \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.114329 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4szrw\" (UniqueName: \"kubernetes.io/projected/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-kube-api-access-4szrw\") pod \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.114441 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-utilities\") pod \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\" (UID: \"d18df24f-85d5-4acf-9469-1bd2c80a3ea6\") " Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.115434 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-utilities" (OuterVolumeSpecName: "utilities") pod "d18df24f-85d5-4acf-9469-1bd2c80a3ea6" (UID: "d18df24f-85d5-4acf-9469-1bd2c80a3ea6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.124382 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-kube-api-access-4szrw" (OuterVolumeSpecName: "kube-api-access-4szrw") pod "d18df24f-85d5-4acf-9469-1bd2c80a3ea6" (UID: "d18df24f-85d5-4acf-9469-1bd2c80a3ea6"). InnerVolumeSpecName "kube-api-access-4szrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.178478 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kb6cr" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.178565 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d18df24f-85d5-4acf-9469-1bd2c80a3ea6" (UID: "d18df24f-85d5-4acf-9469-1bd2c80a3ea6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.215272 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.215308 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.215327 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4szrw\" (UniqueName: \"kubernetes.io/projected/d18df24f-85d5-4acf-9469-1bd2c80a3ea6-kube-api-access-4szrw\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.270999 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.316418 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-catalog-content\") pod \"45ec96ae-4756-4249-b370-ce98fbe47db0\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.316477 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx8k5\" (UniqueName: \"kubernetes.io/projected/45ec96ae-4756-4249-b370-ce98fbe47db0-kube-api-access-qx8k5\") pod \"45ec96ae-4756-4249-b370-ce98fbe47db0\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.316522 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-utilities\") pod \"45ec96ae-4756-4249-b370-ce98fbe47db0\" (UID: \"45ec96ae-4756-4249-b370-ce98fbe47db0\") " Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.317308 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-utilities" (OuterVolumeSpecName: "utilities") pod "45ec96ae-4756-4249-b370-ce98fbe47db0" (UID: "45ec96ae-4756-4249-b370-ce98fbe47db0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.323009 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ec96ae-4756-4249-b370-ce98fbe47db0-kube-api-access-qx8k5" (OuterVolumeSpecName: "kube-api-access-qx8k5") pod "45ec96ae-4756-4249-b370-ce98fbe47db0" (UID: "45ec96ae-4756-4249-b370-ce98fbe47db0"). InnerVolumeSpecName "kube-api-access-qx8k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.374344 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45ec96ae-4756-4249-b370-ce98fbe47db0" (UID: "45ec96ae-4756-4249-b370-ce98fbe47db0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.418506 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.418552 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45ec96ae-4756-4249-b370-ce98fbe47db0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.418565 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx8k5\" (UniqueName: \"kubernetes.io/projected/45ec96ae-4756-4249-b370-ce98fbe47db0-kube-api-access-qx8k5\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.690110 4719 generic.go:334] "Generic (PLEG): container finished" podID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerID="9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656" exitCode=0 Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.690197 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrlw6" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.690219 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrlw6" event={"ID":"d18df24f-85d5-4acf-9469-1bd2c80a3ea6","Type":"ContainerDied","Data":"9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656"} Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.690287 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrlw6" event={"ID":"d18df24f-85d5-4acf-9469-1bd2c80a3ea6","Type":"ContainerDied","Data":"52d247be6023b2dd2745caca216ab35c7c162933b2f292ddb97ec844a2480d5a"} Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.690315 4719 scope.go:117] "RemoveContainer" containerID="9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.698698 4719 generic.go:334] "Generic (PLEG): container finished" podID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerID="7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab" exitCode=0 Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.699711 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tdvl4" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.700257 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdvl4" event={"ID":"45ec96ae-4756-4249-b370-ce98fbe47db0","Type":"ContainerDied","Data":"7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab"} Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.700406 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdvl4" event={"ID":"45ec96ae-4756-4249-b370-ce98fbe47db0","Type":"ContainerDied","Data":"d4618ccafa62b53b7a20dbdbf61c7311e47558325303c4a1e068e81901cdbad4"} Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.714961 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrlw6"] Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.725574 4719 scope.go:117] "RemoveContainer" containerID="c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.732231 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nrlw6"] Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.741406 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tdvl4"] Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.744910 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tdvl4"] Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.748184 4719 scope.go:117] "RemoveContainer" containerID="bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.758749 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kb6cr" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.770688 4719 scope.go:117] "RemoveContainer" containerID="9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656" Nov 24 08:57:38 crc kubenswrapper[4719]: E1124 08:57:38.771117 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656\": container with ID starting with 9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656 not found: ID does not exist" containerID="9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.775027 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656"} err="failed to get container status \"9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656\": rpc error: code = NotFound desc = could not find container \"9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656\": container with ID starting with 9328e7c3e5f0ba5a5854d7dd3075dc33c06f2198fafa65d25cbf8ee0bef9e656 not found: ID does not exist" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.775096 4719 scope.go:117] "RemoveContainer" containerID="c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883" Nov 24 08:57:38 crc kubenswrapper[4719]: E1124 08:57:38.776429 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883\": container with ID starting with c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883 not found: ID does not exist" containerID="c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.776477 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883"} err="failed to get container status \"c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883\": rpc error: code = NotFound desc = could not find container \"c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883\": container with ID starting with c1fc2dbf2e4178a5ad450a98a0c1085f5b05c236292a20ebaf45134a73d7d883 not found: ID does not exist" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.776519 4719 scope.go:117] "RemoveContainer" containerID="bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14" Nov 24 08:57:38 crc kubenswrapper[4719]: E1124 08:57:38.777126 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14\": container with ID starting with bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14 not found: ID does not exist" containerID="bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.777153 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14"} err="failed to get container status \"bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14\": rpc error: code = NotFound desc = could not find container \"bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14\": container with ID starting with bff726acffc37713ded171a14f983862cdb7ac857f1eaebd0efce4e48c743e14 not found: ID does not exist" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.777169 4719 scope.go:117] "RemoveContainer" containerID="7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.792209 4719 scope.go:117] "RemoveContainer" containerID="13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.823819 4719 scope.go:117] "RemoveContainer" containerID="8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.841601 4719 scope.go:117] "RemoveContainer" containerID="7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab" Nov 24 08:57:38 crc kubenswrapper[4719]: E1124 08:57:38.842693 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab\": container with ID starting with 7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab not found: ID does not exist" containerID="7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.842738 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab"} err="failed to get 
container status \"7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab\": rpc error: code = NotFound desc = could not find container \"7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab\": container with ID starting with 7716ce667c96bf2149faab4efad1ba8a99d88c43c957ae9f24b9827cd971edab not found: ID does not exist" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.842780 4719 scope.go:117] "RemoveContainer" containerID="13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c" Nov 24 08:57:38 crc kubenswrapper[4719]: E1124 08:57:38.843386 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c\": container with ID starting with 13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c not found: ID does not exist" containerID="13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.843415 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c"} err="failed to get container status \"13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c\": rpc error: code = NotFound desc = could not find container \"13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c\": container with ID starting with 13db325656d1a7b8fba0fd23d305c8a316caf14e5331449ba517f950eca6753c not found: ID does not exist" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.843434 4719 scope.go:117] "RemoveContainer" containerID="8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb" Nov 24 08:57:38 crc kubenswrapper[4719]: E1124 08:57:38.843974 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb\": container with ID starting with 8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb not found: ID does not exist" containerID="8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb" Nov 24 08:57:38 crc kubenswrapper[4719]: I1124 08:57:38.843993 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb"} err="failed to get container status \"8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb\": rpc error: code = NotFound desc = could not find container \"8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb\": container with ID starting with 8d229ccfea1c8ab84e854f4ff482d51827855c705a18bdd78ee14b6c16862deb not found: ID does not exist" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.277909 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nn46q"] Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.278415 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nn46q" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerName="registry-server" containerID="cri-o://c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c" gracePeriod=2 Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.534809 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" 
path="/var/lib/kubelet/pods/45ec96ae-4756-4249-b370-ce98fbe47db0/volumes" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.535748 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" path="/var/lib/kubelet/pods/d18df24f-85d5-4acf-9469-1bd2c80a3ea6/volumes" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.651753 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nn46q" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.714636 4719 generic.go:334] "Generic (PLEG): container finished" podID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerID="c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c" exitCode=0 Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.714701 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nn46q" event={"ID":"687a3665-1f60-48cf-ad90-013c77a6fefb","Type":"ContainerDied","Data":"c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c"} Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.714714 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nn46q" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.714779 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nn46q" event={"ID":"687a3665-1f60-48cf-ad90-013c77a6fefb","Type":"ContainerDied","Data":"9ac5a3919df87325fd491d02c1845beaa4af8d4f4f97403d943d30406d750bdd"} Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.714807 4719 scope.go:117] "RemoveContainer" containerID="c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.734388 4719 scope.go:117] "RemoveContainer" containerID="61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.751260 4719 scope.go:117] "RemoveContainer" containerID="f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.757504 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-utilities\") pod \"687a3665-1f60-48cf-ad90-013c77a6fefb\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.757624 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-catalog-content\") pod \"687a3665-1f60-48cf-ad90-013c77a6fefb\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.757657 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k99vx\" (UniqueName: \"kubernetes.io/projected/687a3665-1f60-48cf-ad90-013c77a6fefb-kube-api-access-k99vx\") pod \"687a3665-1f60-48cf-ad90-013c77a6fefb\" (UID: \"687a3665-1f60-48cf-ad90-013c77a6fefb\") " Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.758704 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-utilities" (OuterVolumeSpecName: "utilities") pod "687a3665-1f60-48cf-ad90-013c77a6fefb" (UID: 
"687a3665-1f60-48cf-ad90-013c77a6fefb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.766285 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687a3665-1f60-48cf-ad90-013c77a6fefb-kube-api-access-k99vx" (OuterVolumeSpecName: "kube-api-access-k99vx") pod "687a3665-1f60-48cf-ad90-013c77a6fefb" (UID: "687a3665-1f60-48cf-ad90-013c77a6fefb"). InnerVolumeSpecName "kube-api-access-k99vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.775029 4719 scope.go:117] "RemoveContainer" containerID="c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c" Nov 24 08:57:40 crc kubenswrapper[4719]: E1124 08:57:40.776000 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c\": container with ID starting with c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c not found: ID does not exist" containerID="c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.776043 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c"} err="failed to get container status \"c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c\": rpc error: code = NotFound desc = could not find container \"c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c\": container with ID starting with c6cfff1d6e7819a678aa1892ca51220cb2e092024f5ae989784ee1108aff2d5c not found: ID does not exist" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.776066 4719 scope.go:117] "RemoveContainer" containerID="61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75" Nov 24 08:57:40 crc kubenswrapper[4719]: E1124 08:57:40.776391 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75\": container with ID starting with 61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75 not found: ID does not exist" containerID="61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.776424 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75"} err="failed to get container status \"61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75\": rpc error: code = NotFound desc = could not find container \"61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75\": container with ID starting with 61f6f0a0a272ac08d9f031d6b526ed94457b7578448bf3da26c1e9c9064dfb75 not found: ID does not exist" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.776441 4719 scope.go:117] "RemoveContainer" containerID="f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944" Nov 24 08:57:40 crc kubenswrapper[4719]: E1124 08:57:40.776848 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944\": container with ID starting with 
f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944 not found: ID does not exist" containerID="f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.776894 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944"} err="failed to get container status \"f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944\": rpc error: code = NotFound desc = could not find container \"f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944\": container with ID starting with f6486f062e64cc507d0e7805e04cdd9a698ba96307668c433b5e6e494df52944 not found: ID does not exist" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.783376 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "687a3665-1f60-48cf-ad90-013c77a6fefb" (UID: "687a3665-1f60-48cf-ad90-013c77a6fefb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.860174 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.860217 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/687a3665-1f60-48cf-ad90-013c77a6fefb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:40 crc kubenswrapper[4719]: I1124 08:57:40.860265 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k99vx\" (UniqueName: \"kubernetes.io/projected/687a3665-1f60-48cf-ad90-013c77a6fefb-kube-api-access-k99vx\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.045559 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nn46q"] Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.045615 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nn46q"] Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.284163 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kb6cr"] Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.284627 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kb6cr" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerName="registry-server" containerID="cri-o://45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c" gracePeriod=2 Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.649615 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb6cr" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.722259 4719 generic.go:334] "Generic (PLEG): container finished" podID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerID="45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c" exitCode=0 Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.722317 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kb6cr" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.722331 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6cr" event={"ID":"752be1f4-8bf3-403b-a203-bae1d69d05bb","Type":"ContainerDied","Data":"45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c"} Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.722750 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6cr" event={"ID":"752be1f4-8bf3-403b-a203-bae1d69d05bb","Type":"ContainerDied","Data":"078a37c3166ba3c4ed74be993387b29acba39dd6f19f237af7528b5ed2f81488"} Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.722771 4719 scope.go:117] "RemoveContainer" containerID="45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.739949 4719 scope.go:117] "RemoveContainer" containerID="0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.754070 4719 scope.go:117] "RemoveContainer" containerID="91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.772307 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-catalog-content\") pod \"752be1f4-8bf3-403b-a203-bae1d69d05bb\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.772467 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc5n4\" (UniqueName: \"kubernetes.io/projected/752be1f4-8bf3-403b-a203-bae1d69d05bb-kube-api-access-kc5n4\") pod \"752be1f4-8bf3-403b-a203-bae1d69d05bb\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.772496 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-utilities\") pod \"752be1f4-8bf3-403b-a203-bae1d69d05bb\" (UID: \"752be1f4-8bf3-403b-a203-bae1d69d05bb\") " Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.773143 4719 scope.go:117] "RemoveContainer" containerID="45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c" Nov 24 08:57:41 crc kubenswrapper[4719]: E1124 08:57:41.773623 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c\": container with ID starting with 45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c not found: ID does not exist" containerID="45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.773675 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c"} err="failed to get container status \"45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c\": rpc error: code = NotFound desc = could not find container \"45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c\": container with ID starting with 45b5133b06d0595ae9cae8e85ae12234f8cecfa2b3228b7d38fa7f60fd03af5c not found: ID does not exist" 
Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.773711 4719 scope.go:117] "RemoveContainer" containerID="0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.773741 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-utilities" (OuterVolumeSpecName: "utilities") pod "752be1f4-8bf3-403b-a203-bae1d69d05bb" (UID: "752be1f4-8bf3-403b-a203-bae1d69d05bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:41 crc kubenswrapper[4719]: E1124 08:57:41.774139 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93\": container with ID starting with 0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93 not found: ID does not exist" containerID="0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.774179 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93"} err="failed to get container status \"0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93\": rpc error: code = NotFound desc = could not find container \"0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93\": container with ID starting with 0e16f36bc2241cd23eb6ff40488bb260c59807ea3b86cd0340ebf62a8c628c93 not found: ID does not exist" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.774207 4719 scope.go:117] "RemoveContainer" containerID="91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814" Nov 24 08:57:41 crc kubenswrapper[4719]: E1124 08:57:41.774580 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814\": container with ID starting with 91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814 not found: ID does not exist" containerID="91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.774604 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814"} err="failed to get container status \"91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814\": rpc error: code = NotFound desc = could not find container \"91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814\": container with ID starting with 91f8d514bde78a8abad141e071cec6b19fdd2b0c6db62a2b4f205d6441501814 not found: ID does not exist" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.778816 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752be1f4-8bf3-403b-a203-bae1d69d05bb-kube-api-access-kc5n4" (OuterVolumeSpecName: "kube-api-access-kc5n4") pod "752be1f4-8bf3-403b-a203-bae1d69d05bb" (UID: "752be1f4-8bf3-403b-a203-bae1d69d05bb"). InnerVolumeSpecName "kube-api-access-kc5n4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.864249 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "752be1f4-8bf3-403b-a203-bae1d69d05bb" (UID: "752be1f4-8bf3-403b-a203-bae1d69d05bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.874141 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.874188 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc5n4\" (UniqueName: \"kubernetes.io/projected/752be1f4-8bf3-403b-a203-bae1d69d05bb-kube-api-access-kc5n4\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:41 crc kubenswrapper[4719]: I1124 08:57:41.874208 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752be1f4-8bf3-403b-a203-bae1d69d05bb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:42 crc kubenswrapper[4719]: I1124 08:57:42.055583 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kb6cr"] Nov 24 08:57:42 crc kubenswrapper[4719]: I1124 08:57:42.058931 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kb6cr"] Nov 24 08:57:42 crc kubenswrapper[4719]: I1124 08:57:42.527861 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" path="/var/lib/kubelet/pods/687a3665-1f60-48cf-ad90-013c77a6fefb/volumes" Nov 24 08:57:42 crc kubenswrapper[4719]: I1124 08:57:42.528680 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" path="/var/lib/kubelet/pods/752be1f4-8bf3-403b-a203-bae1d69d05bb/volumes" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.045366 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ljp9t"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.045998 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ljp9t" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerName="registry-server" containerID="cri-o://286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b" gracePeriod=30 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.057854 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mjzxt"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.058202 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mjzxt" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerName="registry-server" containerID="cri-o://0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21" gracePeriod=30 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.070011 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtqd7"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.070349 4719 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator" containerID="cri-o://dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5" gracePeriod=30 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.073718 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszz2"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.073977 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lszz2" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerName="registry-server" containerID="cri-o://f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73" gracePeriod=30 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.080389 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sw8vr"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.080694 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sw8vr" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerName="registry-server" containerID="cri-o://f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe" gracePeriod=30 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098195 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mlglm"] Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098547 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerName="extract-utilities" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098574 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerName="extract-utilities" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098592 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerName="extract-content" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098600 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerName="extract-content" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098612 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerName="extract-utilities" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098620 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerName="extract-utilities" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098630 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerName="extract-utilities" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098640 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerName="extract-utilities" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098658 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerName="extract-content" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098672 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerName="extract-content" Nov 24 08:57:45 crc 
kubenswrapper[4719]: E1124 08:57:45.098682 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098690 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098702 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerName="extract-content" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098709 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerName="extract-content" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098719 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098727 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098735 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098742 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098755 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerName="extract-utilities" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098762 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerName="extract-utilities" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098773 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098781 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.098792 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerName="extract-content" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098801 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerName="extract-content" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098936 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="687a3665-1f60-48cf-ad90-013c77a6fefb" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098952 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="752be1f4-8bf3-403b-a203-bae1d69d05bb" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098959 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18df24f-85d5-4acf-9469-1bd2c80a3ea6" containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.098975 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ec96ae-4756-4249-b370-ce98fbe47db0" 
containerName="registry-server" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.099636 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.112208 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mlglm"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.214246 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86j8d\" (UniqueName: \"kubernetes.io/projected/304abde6-d85e-4425-93f5-af2b501ab1c9-kube-api-access-86j8d\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.214407 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/304abde6-d85e-4425-93f5-af2b501ab1c9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.214470 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/304abde6-d85e-4425-93f5-af2b501ab1c9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.316134 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86j8d\" (UniqueName: \"kubernetes.io/projected/304abde6-d85e-4425-93f5-af2b501ab1c9-kube-api-access-86j8d\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.316231 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/304abde6-d85e-4425-93f5-af2b501ab1c9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.316337 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/304abde6-d85e-4425-93f5-af2b501ab1c9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.317864 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/304abde6-d85e-4425-93f5-af2b501ab1c9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: 
I1124 08:57:45.324071 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/304abde6-d85e-4425-93f5-af2b501ab1c9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.336123 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86j8d\" (UniqueName: \"kubernetes.io/projected/304abde6-d85e-4425-93f5-af2b501ab1c9-kube-api-access-86j8d\") pod \"marketplace-operator-79b997595-mlglm\" (UID: \"304abde6-d85e-4425-93f5-af2b501ab1c9\") " pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.384512 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.518736 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.602068 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.628112 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-operator-metrics\") pod \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.628185 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch4nr\" (UniqueName: \"kubernetes.io/projected/ff5bf07f-1775-4310-a0b3-5306a4202228-kube-api-access-ch4nr\") pod \"ff5bf07f-1775-4310-a0b3-5306a4202228\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.628213 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4qzh\" (UniqueName: \"kubernetes.io/projected/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-kube-api-access-v4qzh\") pod \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.628261 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-trusted-ca\") pod \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\" (UID: \"76540cf5-0cd5-4282-b3c3-dd12105f0d4e\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.629136 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-utilities\") pod \"ff5bf07f-1775-4310-a0b3-5306a4202228\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.629158 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-catalog-content\") pod 
\"ff5bf07f-1775-4310-a0b3-5306a4202228\" (UID: \"ff5bf07f-1775-4310-a0b3-5306a4202228\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.629490 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.630613 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "76540cf5-0cd5-4282-b3c3-dd12105f0d4e" (UID: "76540cf5-0cd5-4282-b3c3-dd12105f0d4e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.638545 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "76540cf5-0cd5-4282-b3c3-dd12105f0d4e" (UID: "76540cf5-0cd5-4282-b3c3-dd12105f0d4e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.638552 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-kube-api-access-v4qzh" (OuterVolumeSpecName: "kube-api-access-v4qzh") pod "76540cf5-0cd5-4282-b3c3-dd12105f0d4e" (UID: "76540cf5-0cd5-4282-b3c3-dd12105f0d4e"). InnerVolumeSpecName "kube-api-access-v4qzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.639323 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-utilities" (OuterVolumeSpecName: "utilities") pod "ff5bf07f-1775-4310-a0b3-5306a4202228" (UID: "ff5bf07f-1775-4310-a0b3-5306a4202228"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.640151 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5bf07f-1775-4310-a0b3-5306a4202228-kube-api-access-ch4nr" (OuterVolumeSpecName: "kube-api-access-ch4nr") pod "ff5bf07f-1775-4310-a0b3-5306a4202228" (UID: "ff5bf07f-1775-4310-a0b3-5306a4202228"). InnerVolumeSpecName "kube-api-access-ch4nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.683574 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.698262 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738503 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plvjn\" (UniqueName: \"kubernetes.io/projected/d599ee52-0a8d-4f3b-8ffe-624b8d580382-kube-api-access-plvjn\") pod \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738594 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-catalog-content\") pod \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738637 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-utilities\") pod \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738705 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-utilities\") pod \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738728 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-catalog-content\") pod \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\" (UID: \"d599ee52-0a8d-4f3b-8ffe-624b8d580382\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738760 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ckjc\" (UniqueName: \"kubernetes.io/projected/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-kube-api-access-6ckjc\") pod \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738805 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-catalog-content\") pod \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738849 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-utilities\") pod \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\" (UID: \"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.738891 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf4gs\" (UniqueName: \"kubernetes.io/projected/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-kube-api-access-tf4gs\") pod \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\" (UID: \"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3\") " Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.739168 4719 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-operator-metrics\") on node 
\"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.739190 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch4nr\" (UniqueName: \"kubernetes.io/projected/ff5bf07f-1775-4310-a0b3-5306a4202228-kube-api-access-ch4nr\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.739200 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4qzh\" (UniqueName: \"kubernetes.io/projected/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-kube-api-access-v4qzh\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.739212 4719 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76540cf5-0cd5-4282-b3c3-dd12105f0d4e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.739224 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.744530 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-kube-api-access-tf4gs" (OuterVolumeSpecName: "kube-api-access-tf4gs") pod "f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" (UID: "f4dd48f4-5b1b-4e66-9a2a-38d5005672b3"). InnerVolumeSpecName "kube-api-access-tf4gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.746933 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff5bf07f-1775-4310-a0b3-5306a4202228" (UID: "ff5bf07f-1775-4310-a0b3-5306a4202228"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.754058 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-utilities" (OuterVolumeSpecName: "utilities") pod "d599ee52-0a8d-4f3b-8ffe-624b8d580382" (UID: "d599ee52-0a8d-4f3b-8ffe-624b8d580382"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.754713 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-utilities" (OuterVolumeSpecName: "utilities") pod "30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" (UID: "30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.755017 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d599ee52-0a8d-4f3b-8ffe-624b8d580382-kube-api-access-plvjn" (OuterVolumeSpecName: "kube-api-access-plvjn") pod "d599ee52-0a8d-4f3b-8ffe-624b8d580382" (UID: "d599ee52-0a8d-4f3b-8ffe-624b8d580382"). InnerVolumeSpecName "kube-api-access-plvjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.755143 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-utilities" (OuterVolumeSpecName: "utilities") pod "f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" (UID: "f4dd48f4-5b1b-4e66-9a2a-38d5005672b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.755790 4719 generic.go:334] "Generic (PLEG): container finished" podID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerID="dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5" exitCode=0 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.755878 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" event={"ID":"76540cf5-0cd5-4282-b3c3-dd12105f0d4e","Type":"ContainerDied","Data":"dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.755929 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" event={"ID":"76540cf5-0cd5-4282-b3c3-dd12105f0d4e","Type":"ContainerDied","Data":"6ff5eb1f5bb61c9bae7d66a488056515df664553e848c96b5e054b8eeb8a30e6"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.755948 4719 scope.go:117] "RemoveContainer" containerID="dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.755980 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gtqd7" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.765379 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-kube-api-access-6ckjc" (OuterVolumeSpecName: "kube-api-access-6ckjc") pod "30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" (UID: "30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5"). InnerVolumeSpecName "kube-api-access-6ckjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.772354 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" (UID: "f4dd48f4-5b1b-4e66-9a2a-38d5005672b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.776575 4719 generic.go:334] "Generic (PLEG): container finished" podID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerID="286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b" exitCode=0 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.776688 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljp9t" event={"ID":"ff5bf07f-1775-4310-a0b3-5306a4202228","Type":"ContainerDied","Data":"286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.776719 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ljp9t" event={"ID":"ff5bf07f-1775-4310-a0b3-5306a4202228","Type":"ContainerDied","Data":"6af65096cd566564aace66ead967740b5178239dcd776419e469ad8d6d171f3c"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.781272 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ljp9t" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.786313 4719 scope.go:117] "RemoveContainer" containerID="dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.787371 4719 generic.go:334] "Generic (PLEG): container finished" podID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerID="f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73" exitCode=0 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.787565 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszz2" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.787663 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszz2" event={"ID":"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3","Type":"ContainerDied","Data":"f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.787748 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszz2" event={"ID":"f4dd48f4-5b1b-4e66-9a2a-38d5005672b3","Type":"ContainerDied","Data":"a8b0b17ca4588dc1882423816ffbc4a62e3c60fb354e2a392d0409f9aecca8da"} Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.787395 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5\": container with ID starting with dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5 not found: ID does not exist" containerID="dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.787946 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5"} err="failed to get container status \"dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5\": rpc error: code = NotFound desc = could not find container \"dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5\": container with ID starting with dae3abdbd7a485cd3dd730b07284c8da8c6a855d0cdaae80eb0b1d52da596dc5 not found: ID does not exist" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.788107 4719 
scope.go:117] "RemoveContainer" containerID="286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.792472 4719 generic.go:334] "Generic (PLEG): container finished" podID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerID="f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe" exitCode=0 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.792644 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw8vr" event={"ID":"d599ee52-0a8d-4f3b-8ffe-624b8d580382","Type":"ContainerDied","Data":"f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.792752 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw8vr" event={"ID":"d599ee52-0a8d-4f3b-8ffe-624b8d580382","Type":"ContainerDied","Data":"cc4e73d63d422aea0949e4ead62803a1773b0ebbe177f777e7f23a09a4b35b20"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.792945 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sw8vr" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.799706 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjzxt" event={"ID":"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5","Type":"ContainerDied","Data":"0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.799538 4719 generic.go:334] "Generic (PLEG): container finished" podID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerID="0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21" exitCode=0 Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.800015 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjzxt" event={"ID":"30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5","Type":"ContainerDied","Data":"a823c705c434c6663d75b523cb76c9ca65f0ecc5e1e41ed6bcc56d6d6f367756"} Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.799759 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjzxt" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.805648 4719 scope.go:117] "RemoveContainer" containerID="84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.828700 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtqd7"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.830282 4719 scope.go:117] "RemoveContainer" containerID="e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.838927 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gtqd7"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.847555 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszz2"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.848238 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.848258 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5bf07f-1775-4310-a0b3-5306a4202228-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.848270 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.848285 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ckjc\" (UniqueName: \"kubernetes.io/projected/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-kube-api-access-6ckjc\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.848299 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.848310 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.848322 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf4gs\" (UniqueName: \"kubernetes.io/projected/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3-kube-api-access-tf4gs\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.848335 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plvjn\" (UniqueName: \"kubernetes.io/projected/d599ee52-0a8d-4f3b-8ffe-624b8d580382-kube-api-access-plvjn\") on node \"crc\" DevicePath \"\"" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.853765 4719 scope.go:117] "RemoveContainer" containerID="286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.855559 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszz2"] Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.860211 
4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ljp9t"] Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.862732 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b\": container with ID starting with 286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b not found: ID does not exist" containerID="286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.862783 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b"} err="failed to get container status \"286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b\": rpc error: code = NotFound desc = could not find container \"286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b\": container with ID starting with 286aaee7f23359b74832468af13d2eae62261bcee73ea866f33e79e2c690493b not found: ID does not exist" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.862813 4719 scope.go:117] "RemoveContainer" containerID="84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.864648 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b\": container with ID starting with 84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b not found: ID does not exist" containerID="84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.864695 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b"} err="failed to get container status \"84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b\": rpc error: code = NotFound desc = could not find container \"84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b\": container with ID starting with 84eafe1e6c754505f9ce9d9850acd3bccc2853f1febae1f7b50997de3854604b not found: ID does not exist" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.864723 4719 scope.go:117] "RemoveContainer" containerID="e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7" Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.865371 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7\": container with ID starting with e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7 not found: ID does not exist" containerID="e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7" Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.865430 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7"} err="failed to get container status \"e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7\": rpc error: code = NotFound desc = could not find container \"e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7\": container with ID starting with 
e5769e58559da46c5a70db795390fbcf2e0f16b3cd5bfe919636007fda95cdb7 not found: ID does not exist"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.865449 4719 scope.go:117] "RemoveContainer" containerID="f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.869465 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ljp9t"]
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.885523 4719 scope.go:117] "RemoveContainer" containerID="250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.899838 4719 scope.go:117] "RemoveContainer" containerID="5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.907341 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" (UID: "30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.917407 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d599ee52-0a8d-4f3b-8ffe-624b8d580382" (UID: "d599ee52-0a8d-4f3b-8ffe-624b8d580382"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.929800 4719 scope.go:117] "RemoveContainer" containerID="f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.935570 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mlglm"]
Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.937599 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73\": container with ID starting with f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73 not found: ID does not exist" containerID="f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.937639 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73"} err="failed to get container status \"f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73\": rpc error: code = NotFound desc = could not find container \"f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73\": container with ID starting with f6d9d144552ddb6da81cf6eebc7ca32dc443cf0bf0d0a61363adb2b30f3f6b73 not found: ID does not exist"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.937666 4719 scope.go:117] "RemoveContainer" containerID="250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71"
Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.938156 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71\": container with ID starting with 250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71 not found: ID does not exist" containerID="250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.938188 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71"} err="failed to get container status \"250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71\": rpc error: code = NotFound desc = could not find container \"250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71\": container with ID starting with 250a30f3d17134b781a922887d96afbdf80a30c40b7d71cf3bf628514880bc71 not found: ID does not exist"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.938208 4719 scope.go:117] "RemoveContainer" containerID="5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada"
Nov 24 08:57:45 crc kubenswrapper[4719]: E1124 08:57:45.938400 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada\": container with ID starting with 5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada not found: ID does not exist" containerID="5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.938421 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada"} err="failed to get container status \"5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada\": rpc error: code = NotFound desc = could not find container \"5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada\": container with ID starting with 5808eecdbc6c18f76f0ebc9dc7b0ca6fba986985d2f5c54b9ef729f446b36ada not found: ID does not exist"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.938434 4719 scope.go:117] "RemoveContainer" containerID="f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.949686 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.949716 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d599ee52-0a8d-4f3b-8ffe-624b8d580382-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.963532 4719 scope.go:117] "RemoveContainer" containerID="a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a"
Nov 24 08:57:45 crc kubenswrapper[4719]: I1124 08:57:45.982491 4719 scope.go:117] "RemoveContainer" containerID="d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.009699 4719 scope.go:117] "RemoveContainer" containerID="f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe"
Nov 24 08:57:46 crc kubenswrapper[4719]: E1124 08:57:46.010126 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe\": container with ID starting with f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe not found: ID does not exist" containerID="f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.010157 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe"} err="failed to get container status \"f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe\": rpc error: code = NotFound desc = could not find container \"f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe\": container with ID starting with f5f89ddd8f79e30e84f64032b84ac4a34ee77643a0c2897859d59d13eaac6cfe not found: ID does not exist"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.010177 4719 scope.go:117] "RemoveContainer" containerID="a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a"
Nov 24 08:57:46 crc kubenswrapper[4719]: E1124 08:57:46.010407 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a\": container with ID starting with a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a not found: ID does not exist" containerID="a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.010433 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a"} err="failed to get container status \"a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a\": rpc error: code = NotFound desc = could not find container \"a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a\": container with ID starting with a7e6e251e004ef58d05d86a3af34c7c0ce4649da47b86f084d6c15a0062e9d6a not found: ID does not exist"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.010448 4719 scope.go:117] "RemoveContainer" containerID="d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c"
Nov 24 08:57:46 crc kubenswrapper[4719]: E1124 08:57:46.010659 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c\": container with ID starting with d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c not found: ID does not exist" containerID="d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.010682 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c"} err="failed to get container status \"d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c\": rpc error: code = NotFound desc = could not find container \"d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c\": container with ID starting with d81ecf8f0247b1941c76f4a7a03b7fe2aea0809fb5967c2f6198deee6b2fe49c not found: ID does not exist"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.010695 4719 scope.go:117] "RemoveContainer" containerID="0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.022257 4719 scope.go:117] "RemoveContainer" containerID="165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.039494 4719 scope.go:117] "RemoveContainer" containerID="c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.058371 4719 scope.go:117] "RemoveContainer" containerID="0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21"
Nov 24 08:57:46 crc kubenswrapper[4719]: E1124 08:57:46.058971 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21\": container with ID starting with 0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21 not found: ID does not exist" containerID="0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.059007 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21"} err="failed to get container status \"0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21\": rpc error: code = NotFound desc = could not find container \"0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21\": container with ID starting with 0c64c0d0e7cb9c67659683fd409c5df380d4306bad73f135542bbbac665a4a21 not found: ID does not exist"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.059059 4719 scope.go:117] "RemoveContainer" containerID="165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e"
Nov 24 08:57:46 crc kubenswrapper[4719]: E1124 08:57:46.059341 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e\": container with ID starting with 165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e not found: ID does not exist" containerID="165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.059367 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e"} err="failed to get container status \"165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e\": rpc error: code = NotFound desc = could not find container \"165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e\": container with ID starting with 165ff26abbeae2df3951236ff1a0183d540bf8a09fcf9e1d8bc295794872458e not found: ID does not exist"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.059385 4719 scope.go:117] "RemoveContainer" containerID="c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65"
Nov 24 08:57:46 crc kubenswrapper[4719]: E1124 08:57:46.059687 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65\": container with ID starting with c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65 not found: ID does not exist" containerID="c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.059708 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65"} err="failed to get container status \"c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65\": rpc error: code = NotFound desc = could not find container \"c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65\": container with ID starting with c5ba1847ace8a421e5ddb4d05ffcd15d84637575b539e21b90176f7a72db8b65 not found: ID does not exist"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.126467 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sw8vr"]
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.133788 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sw8vr"]
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.142207 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mjzxt"]
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.152571 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mjzxt"]
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.526998 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" path="/var/lib/kubelet/pods/30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5/volumes"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.527969 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" path="/var/lib/kubelet/pods/76540cf5-0cd5-4282-b3c3-dd12105f0d4e/volumes"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.528579 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" path="/var/lib/kubelet/pods/d599ee52-0a8d-4f3b-8ffe-624b8d580382/volumes"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.529925 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" path="/var/lib/kubelet/pods/f4dd48f4-5b1b-4e66-9a2a-38d5005672b3/volumes"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.530679 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" path="/var/lib/kubelet/pods/ff5bf07f-1775-4310-a0b3-5306a4202228/volumes"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.807135 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" event={"ID":"304abde6-d85e-4425-93f5-af2b501ab1c9","Type":"ContainerStarted","Data":"6b6df705c81ca9423b7596d925baaabab0d6d6d3b96ce1c4b64fecde3a2154be"}
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.807519 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mlglm"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.807532 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" event={"ID":"304abde6-d85e-4425-93f5-af2b501ab1c9","Type":"ContainerStarted","Data":"6a4a4d368bacfb544c3ff18adfe81e204d8cf672112ff5d06ca3cc408d2d83fc"}
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.810184 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mlglm"
Nov 24 08:57:46 crc kubenswrapper[4719]: I1124 08:57:46.830254 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mlglm" podStartSLOduration=1.8302323889999998 podStartE2EDuration="1.830232389s" podCreationTimestamp="2025-11-24 08:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 08:57:46.829438075 +0000 UTC m=+243.160711327" watchObservedRunningTime="2025-11-24 08:57:46.830232389 +0000 UTC m=+243.161505651"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.087990 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k8b6n"]
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088246 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088262 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088280 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerName="extract-utilities"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088288 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerName="extract-utilities"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088298 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerName="extract-utilities"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088305 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerName="extract-utilities"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088315 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerName="extract-utilities"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088322 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerName="extract-utilities"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088330 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerName="extract-content"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088337 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerName="extract-content"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088348 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerName="extract-content"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088357 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerName="extract-content"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088365 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088373 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088382 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088390 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088398 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088405 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088414 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088421 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088432 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerName="extract-content"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088438 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerName="extract-content"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088448 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerName="extract-utilities"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088455 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerName="extract-utilities"
Nov 24 08:57:47 crc kubenswrapper[4719]: E1124 08:57:47.088468 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerName="extract-content"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088476 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerName="extract-content"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088593 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b766c8-c7ab-4e67-93a7-a0c52bfcbfa5" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088608 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4dd48f4-5b1b-4e66-9a2a-38d5005672b3" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088623 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d599ee52-0a8d-4f3b-8ffe-624b8d580382" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088633 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5bf07f-1775-4310-a0b3-5306a4202228" containerName="registry-server"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.088654 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76540cf5-0cd5-4282-b3c3-dd12105f0d4e" containerName="marketplace-operator"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.089504 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.091925 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.097307 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8b6n"]
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.164271 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa8925-5590-43e3-b4a1-4c1bda621334-utilities\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.164322 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa8925-5590-43e3-b4a1-4c1bda621334-catalog-content\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.164370 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-758cz\" (UniqueName: \"kubernetes.io/projected/d4fa8925-5590-43e3-b4a1-4c1bda621334-kube-api-access-758cz\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.265217 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa8925-5590-43e3-b4a1-4c1bda621334-catalog-content\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.265394 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-758cz\" (UniqueName: \"kubernetes.io/projected/d4fa8925-5590-43e3-b4a1-4c1bda621334-kube-api-access-758cz\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.265442 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa8925-5590-43e3-b4a1-4c1bda621334-utilities\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.265734 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa8925-5590-43e3-b4a1-4c1bda621334-catalog-content\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.265835 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa8925-5590-43e3-b4a1-4c1bda621334-utilities\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.293023 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-758cz\" (UniqueName: \"kubernetes.io/projected/d4fa8925-5590-43e3-b4a1-4c1bda621334-kube-api-access-758cz\") pod \"certified-operators-k8b6n\" (UID: \"d4fa8925-5590-43e3-b4a1-4c1bda621334\") " pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.293696 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vhxvl"]
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.294887 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.297206 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.307793 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhxvl"]
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.366939 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-catalog-content\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.367011 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-utilities\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.367075 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7j4\" (UniqueName: \"kubernetes.io/projected/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-kube-api-access-xp7j4\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.408459 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.467765 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7j4\" (UniqueName: \"kubernetes.io/projected/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-kube-api-access-xp7j4\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.468493 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-catalog-content\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.468623 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-utilities\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.469135 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-catalog-content\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.469145 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-utilities\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.497062 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7j4\" (UniqueName: \"kubernetes.io/projected/55e4ac5d-677d-41b4-b3c8-adaac9928f7d-kube-api-access-xp7j4\") pod \"community-operators-vhxvl\" (UID: \"55e4ac5d-677d-41b4-b3c8-adaac9928f7d\") " pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.603647 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8b6n"]
Nov 24 08:57:47 crc kubenswrapper[4719]: W1124 08:57:47.607561 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4fa8925_5590_43e3_b4a1_4c1bda621334.slice/crio-67793e5a352d3b0ef6ac55751df1ab652238135a6d83994fe26cb1ce9c1a9c79 WatchSource:0}: Error finding container 67793e5a352d3b0ef6ac55751df1ab652238135a6d83994fe26cb1ce9c1a9c79: Status 404 returned error can't find the container with id 67793e5a352d3b0ef6ac55751df1ab652238135a6d83994fe26cb1ce9c1a9c79
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.628529 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.825597 4719 generic.go:334] "Generic (PLEG): container finished" podID="d4fa8925-5590-43e3-b4a1-4c1bda621334" containerID="560473f7a694743fe3a36ea93b6f65b95872fa4ba68a180215f0833337c11394" exitCode=0
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.825710 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8b6n" event={"ID":"d4fa8925-5590-43e3-b4a1-4c1bda621334","Type":"ContainerDied","Data":"560473f7a694743fe3a36ea93b6f65b95872fa4ba68a180215f0833337c11394"}
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.825740 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8b6n" event={"ID":"d4fa8925-5590-43e3-b4a1-4c1bda621334","Type":"ContainerStarted","Data":"67793e5a352d3b0ef6ac55751df1ab652238135a6d83994fe26cb1ce9c1a9c79"}
Nov 24 08:57:47 crc kubenswrapper[4719]: I1124 08:57:47.839427 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhxvl"]
Nov 24 08:57:48 crc kubenswrapper[4719]: I1124 08:57:48.832024 4719 generic.go:334] "Generic (PLEG): container finished" podID="55e4ac5d-677d-41b4-b3c8-adaac9928f7d" containerID="8f4aa86895ac185050611bc8d86e79e858ef77ac5e176e258d87679babfb3a79" exitCode=0
Nov 24 08:57:48 crc kubenswrapper[4719]: I1124 08:57:48.832164 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhxvl" event={"ID":"55e4ac5d-677d-41b4-b3c8-adaac9928f7d","Type":"ContainerDied","Data":"8f4aa86895ac185050611bc8d86e79e858ef77ac5e176e258d87679babfb3a79"}
Nov 24 08:57:48 crc kubenswrapper[4719]: I1124 08:57:48.832253 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhxvl" event={"ID":"55e4ac5d-677d-41b4-b3c8-adaac9928f7d","Type":"ContainerStarted","Data":"b8b4aa22762414c87ca57bd30bc06c259a352f6e488993b8cf3c36a5b2ad5e8e"}
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.495138 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-54tzs"]
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.497562 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.502642 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.512304 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54tzs"]
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.598119 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74lf\" (UniqueName: \"kubernetes.io/projected/56525057-4157-4fce-9288-ddae977d1037-kube-api-access-k74lf\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.598442 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56525057-4157-4fce-9288-ddae977d1037-catalog-content\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.598588 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56525057-4157-4fce-9288-ddae977d1037-utilities\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.688383 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zgtch"]
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.689844 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.691983 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.694837 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zgtch"]
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.701989 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56525057-4157-4fce-9288-ddae977d1037-catalog-content\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.702066 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56525057-4157-4fce-9288-ddae977d1037-utilities\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.702151 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k74lf\" (UniqueName: \"kubernetes.io/projected/56525057-4157-4fce-9288-ddae977d1037-kube-api-access-k74lf\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.702554 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56525057-4157-4fce-9288-ddae977d1037-catalog-content\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.702606 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56525057-4157-4fce-9288-ddae977d1037-utilities\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.744644 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k74lf\" (UniqueName: \"kubernetes.io/projected/56525057-4157-4fce-9288-ddae977d1037-kube-api-access-k74lf\") pod \"redhat-marketplace-54tzs\" (UID: \"56525057-4157-4fce-9288-ddae977d1037\") " pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.803662 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d6lc\" (UniqueName: \"kubernetes.io/projected/cbda51de-65a7-4a82-b61a-05ad0766c72d-kube-api-access-7d6lc\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.803709 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-utilities\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.803827 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-catalog-content\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.816668 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.840772 4719 generic.go:334] "Generic (PLEG): container finished" podID="d4fa8925-5590-43e3-b4a1-4c1bda621334" containerID="392fd62d6cf37b8ba0b3890318c7dabedad590ff6bfe7a96067d578c4e192208" exitCode=0
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.840829 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8b6n" event={"ID":"d4fa8925-5590-43e3-b4a1-4c1bda621334","Type":"ContainerDied","Data":"392fd62d6cf37b8ba0b3890318c7dabedad590ff6bfe7a96067d578c4e192208"}
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.853171 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhxvl" event={"ID":"55e4ac5d-677d-41b4-b3c8-adaac9928f7d","Type":"ContainerStarted","Data":"2991b30cf77de167fe756c55cd9892cc2f51c6855ec76317fcb63f3e9205cde6"}
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.904756 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-catalog-content\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.904823 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d6lc\" (UniqueName: \"kubernetes.io/projected/cbda51de-65a7-4a82-b61a-05ad0766c72d-kube-api-access-7d6lc\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.904850 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-utilities\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.906002 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-catalog-content\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.906563 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-utilities\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:49 crc kubenswrapper[4719]: I1124 08:57:49.935969 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d6lc\" (UniqueName: \"kubernetes.io/projected/cbda51de-65a7-4a82-b61a-05ad0766c72d-kube-api-access-7d6lc\") pod \"redhat-operators-zgtch\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.013797 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.237877 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zgtch"]
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.249171 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-54tzs"]
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.862630 4719 generic.go:334] "Generic (PLEG): container finished" podID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerID="d1ffada7af2b79e77afb76a81abe92c558a7a6fd6c6165d747245763d5893435" exitCode=0
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.863665 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zgtch" event={"ID":"cbda51de-65a7-4a82-b61a-05ad0766c72d","Type":"ContainerDied","Data":"d1ffada7af2b79e77afb76a81abe92c558a7a6fd6c6165d747245763d5893435"}
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.864607 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zgtch" event={"ID":"cbda51de-65a7-4a82-b61a-05ad0766c72d","Type":"ContainerStarted","Data":"f6f50ea7a5d12ad566d87bb20cb81793afee8f4c16d804cf9adb2586a8ba45b2"}
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.866980 4719 generic.go:334] "Generic (PLEG): container finished" podID="55e4ac5d-677d-41b4-b3c8-adaac9928f7d" containerID="2991b30cf77de167fe756c55cd9892cc2f51c6855ec76317fcb63f3e9205cde6" exitCode=0
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.867049 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhxvl" event={"ID":"55e4ac5d-677d-41b4-b3c8-adaac9928f7d","Type":"ContainerDied","Data":"2991b30cf77de167fe756c55cd9892cc2f51c6855ec76317fcb63f3e9205cde6"}
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.872662 4719 generic.go:334] "Generic (PLEG): container finished" podID="56525057-4157-4fce-9288-ddae977d1037" containerID="1a47a01504cb6758a6c4fd46cb41ad79f03685190ae71b3663eb3749147c6714" exitCode=0
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.872695 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54tzs" event={"ID":"56525057-4157-4fce-9288-ddae977d1037","Type":"ContainerDied","Data":"1a47a01504cb6758a6c4fd46cb41ad79f03685190ae71b3663eb3749147c6714"}
Nov 24 08:57:50 crc kubenswrapper[4719]: I1124 08:57:50.872714 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54tzs" event={"ID":"56525057-4157-4fce-9288-ddae977d1037","Type":"ContainerStarted","Data":"eb58f1c1cfb89e197e7af4d4d7197173100db7e5f3785480c37d3ba066341b28"}
Nov 24 08:57:51 crc kubenswrapper[4719]: I1124 08:57:51.884085 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zgtch" event={"ID":"cbda51de-65a7-4a82-b61a-05ad0766c72d","Type":"ContainerStarted","Data":"0e49187facffc9f97938d78f3a1e5cd1b6bb3757f45aae3e4381cc449710a401"}
Nov 24 08:57:51 crc kubenswrapper[4719]: I1124 08:57:51.887442 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhxvl" event={"ID":"55e4ac5d-677d-41b4-b3c8-adaac9928f7d","Type":"ContainerStarted","Data":"afa1cf0f03c78b1b3a0113a87b419b84226d14f25c8ceb2edc92665808916472"}
Nov 24 08:57:51 crc kubenswrapper[4719]: I1124 08:57:51.894982 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8b6n" event={"ID":"d4fa8925-5590-43e3-b4a1-4c1bda621334","Type":"ContainerStarted","Data":"c29cd3279e5b46c47f9ffd575050119437cf2812bd326f65ba71db52525d686b"}
Nov 24 08:57:51 crc kubenswrapper[4719]: I1124 08:57:51.926454 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vhxvl" podStartSLOduration=2.483885983 podStartE2EDuration="4.926434801s" podCreationTimestamp="2025-11-24 08:57:47 +0000 UTC" firstStartedPulling="2025-11-24 08:57:48.835252954 +0000 UTC m=+245.166526196" lastFinishedPulling="2025-11-24 08:57:51.277801752 +0000 UTC m=+247.609075014" observedRunningTime="2025-11-24 08:57:51.924468791 +0000 UTC m=+248.255742053" watchObservedRunningTime="2025-11-24 08:57:51.926434801 +0000 UTC m=+248.257708053"
Nov 24 08:57:51 crc kubenswrapper[4719]: I1124 08:57:51.943923 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k8b6n" podStartSLOduration=2.057046993 podStartE2EDuration="4.943902489s" podCreationTimestamp="2025-11-24 08:57:47 +0000 UTC" firstStartedPulling="2025-11-24 08:57:47.841807046 +0000 UTC m=+244.173080298" lastFinishedPulling="2025-11-24 08:57:50.728662542 +0000 UTC m=+247.059935794" observedRunningTime="2025-11-24 08:57:51.943412504 +0000 UTC m=+248.274685766" watchObservedRunningTime="2025-11-24 08:57:51.943902489 +0000 UTC m=+248.275175761"
Nov 24 08:57:52 crc kubenswrapper[4719]: I1124 08:57:52.904519 4719 generic.go:334] "Generic (PLEG): container finished" podID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerID="0e49187facffc9f97938d78f3a1e5cd1b6bb3757f45aae3e4381cc449710a401" exitCode=0
Nov 24 08:57:52 crc kubenswrapper[4719]: I1124 08:57:52.904578 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zgtch" event={"ID":"cbda51de-65a7-4a82-b61a-05ad0766c72d","Type":"ContainerDied","Data":"0e49187facffc9f97938d78f3a1e5cd1b6bb3757f45aae3e4381cc449710a401"}
Nov 24 08:57:53 crc kubenswrapper[4719]: I1124 08:57:53.912545 4719 generic.go:334] "Generic (PLEG): container finished" podID="56525057-4157-4fce-9288-ddae977d1037" containerID="1e29bfda41076075b6e59010fbdc83136287023d46bac38b310fa27a0505de0c" exitCode=0
Nov 24 08:57:53 crc kubenswrapper[4719]: I1124 08:57:53.912957 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54tzs" event={"ID":"56525057-4157-4fce-9288-ddae977d1037","Type":"ContainerDied","Data":"1e29bfda41076075b6e59010fbdc83136287023d46bac38b310fa27a0505de0c"}
Nov 24 08:57:54 crc kubenswrapper[4719]: I1124 08:57:54.920285 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zgtch" event={"ID":"cbda51de-65a7-4a82-b61a-05ad0766c72d","Type":"ContainerStarted","Data":"0f2061bd736e4d0d2a510c21309d8b2e532b966789dc97094a33a9dd294cf0cd"}
Nov 24 08:57:54 crc kubenswrapper[4719]: I1124 08:57:54.953688 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zgtch" podStartSLOduration=2.9592081820000002 podStartE2EDuration="5.953668143s" podCreationTimestamp="2025-11-24 08:57:49 +0000 UTC" firstStartedPulling="2025-11-24 08:57:50.865519582 +0000 UTC m=+247.196792834" lastFinishedPulling="2025-11-24 08:57:53.859979543 +0000 UTC m=+250.191252795" observedRunningTime="2025-11-24 08:57:54.951825368 +0000 UTC m=+251.283098640" watchObservedRunningTime="2025-11-24 08:57:54.953668143 +0000 UTC m=+251.284941385"
Nov 24 08:57:56 crc kubenswrapper[4719]: I1124 08:57:56.931992 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-54tzs" event={"ID":"56525057-4157-4fce-9288-ddae977d1037","Type":"ContainerStarted","Data":"c0446e86b62db87d9b885803211a45de5136b29dda256b0f96c4833da89becff"}
Nov 24 08:57:56 crc kubenswrapper[4719]: I1124 08:57:56.967521 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-54tzs" podStartSLOduration=4.214307764 podStartE2EDuration="7.967494124s" podCreationTimestamp="2025-11-24 08:57:49 +0000 UTC" firstStartedPulling="2025-11-24 08:57:50.874249696 +0000 UTC m=+247.205522948" lastFinishedPulling="2025-11-24 08:57:54.627436056 +0000 UTC m=+250.958709308" observedRunningTime="2025-11-24 08:57:56.965304497 +0000 UTC m=+253.296577749" watchObservedRunningTime="2025-11-24 08:57:56.967494124 +0000 UTC m=+253.298767386"
Nov 24 08:57:57 crc kubenswrapper[4719]: I1124 08:57:57.408656 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:57 crc kubenswrapper[4719]: I1124 08:57:57.408996 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:57 crc kubenswrapper[4719]: I1124 08:57:57.458614 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:57 crc kubenswrapper[4719]: I1124 08:57:57.628998 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:57 crc kubenswrapper[4719]: I1124 08:57:57.629089 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:57 crc kubenswrapper[4719]: I1124 08:57:57.685473 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:57 crc kubenswrapper[4719]: I1124 08:57:57.979489 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k8b6n"
Nov 24 08:57:57 crc kubenswrapper[4719]: I1124 08:57:57.981365 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vhxvl"
Nov 24 08:57:59 crc kubenswrapper[4719]: I1124 08:57:59.818602 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:59 crc kubenswrapper[4719]: I1124 08:57:59.818907 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:57:59 crc kubenswrapper[4719]: I1124 08:57:59.859574 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:58:00 crc kubenswrapper[4719]: I1124 08:58:00.014409 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:58:00 crc kubenswrapper[4719]: I1124 08:58:00.014448 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:58:00 crc kubenswrapper[4719]: I1124 08:58:00.062581 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:58:00 crc kubenswrapper[4719]: I1124 08:58:00.988169 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zgtch"
Nov 24 08:58:09 crc kubenswrapper[4719]: I1124 08:58:09.856299 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-54tzs"
Nov 24 08:58:44 crc kubenswrapper[4719]: I1124 08:58:44.326309 4719 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.128074 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"]
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.129194 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.131653 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.131872 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.142853 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"]
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.219946 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lf5z\" (UniqueName: \"kubernetes.io/projected/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-kube-api-access-5lf5z\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.220164 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-secret-volume\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.220264 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-config-volume\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.321742 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lf5z\" (UniqueName: \"kubernetes.io/projected/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-kube-api-access-5lf5z\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.322125 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-secret-volume\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.322255 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-config-volume\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.323367 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-config-volume\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.331821 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-secret-volume\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.340305 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lf5z\" (UniqueName: \"kubernetes.io/projected/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-kube-api-access-5lf5z\") pod \"collect-profiles-29399580-q82qg\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.450313 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:00 crc kubenswrapper[4719]: I1124 09:00:00.651202 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"]
Nov 24 09:00:00 crc kubenswrapper[4719]: W1124 09:00:00.666522 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee4ab863_3119_4f56_b1a3_b16105f0b7ed.slice/crio-f3753f3de655fcabaca597797fc254df4a75284dbea6aaea33f2029f1be4cc8a WatchSource:0}: Error finding container f3753f3de655fcabaca597797fc254df4a75284dbea6aaea33f2029f1be4cc8a: Status 404 returned error can't find the container with id f3753f3de655fcabaca597797fc254df4a75284dbea6aaea33f2029f1be4cc8a
Nov 24 09:00:01 crc kubenswrapper[4719]: I1124 09:00:01.569193 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg" event={"ID":"ee4ab863-3119-4f56-b1a3-b16105f0b7ed","Type":"ContainerDied","Data":"8a3d18ace2fb6cc6fa4d7f7f8739db8e2d1791e46f04074d102ac5b217642a4b"}
Nov 24 09:00:01 crc kubenswrapper[4719]: I1124 09:00:01.569678 4719 generic.go:334] "Generic (PLEG): container finished" podID="ee4ab863-3119-4f56-b1a3-b16105f0b7ed" containerID="8a3d18ace2fb6cc6fa4d7f7f8739db8e2d1791e46f04074d102ac5b217642a4b" exitCode=0
Nov 24 09:00:01 crc kubenswrapper[4719]: I1124 09:00:01.569797 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg" event={"ID":"ee4ab863-3119-4f56-b1a3-b16105f0b7ed","Type":"ContainerStarted","Data":"f3753f3de655fcabaca597797fc254df4a75284dbea6aaea33f2029f1be4cc8a"}
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.765478 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.852172 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-config-volume\") pod \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") "
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.852231 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-secret-volume\") pod \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") "
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.852289 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lf5z\" (UniqueName: \"kubernetes.io/projected/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-kube-api-access-5lf5z\") pod \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\" (UID: \"ee4ab863-3119-4f56-b1a3-b16105f0b7ed\") "
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.853234 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "ee4ab863-3119-4f56-b1a3-b16105f0b7ed" (UID: "ee4ab863-3119-4f56-b1a3-b16105f0b7ed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.857406 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-kube-api-access-5lf5z" (OuterVolumeSpecName: "kube-api-access-5lf5z") pod "ee4ab863-3119-4f56-b1a3-b16105f0b7ed" (UID: "ee4ab863-3119-4f56-b1a3-b16105f0b7ed"). InnerVolumeSpecName "kube-api-access-5lf5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.858184 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ee4ab863-3119-4f56-b1a3-b16105f0b7ed" (UID: "ee4ab863-3119-4f56-b1a3-b16105f0b7ed"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.953961 4719 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.954011 4719 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 09:00:02 crc kubenswrapper[4719]: I1124 09:00:02.954023 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lf5z\" (UniqueName: \"kubernetes.io/projected/ee4ab863-3119-4f56-b1a3-b16105f0b7ed-kube-api-access-5lf5z\") on node \"crc\" DevicePath \"\""
Nov 24 09:00:03 crc kubenswrapper[4719]: I1124 09:00:03.579496 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg" event={"ID":"ee4ab863-3119-4f56-b1a3-b16105f0b7ed","Type":"ContainerDied","Data":"f3753f3de655fcabaca597797fc254df4a75284dbea6aaea33f2029f1be4cc8a"}
Nov 24 09:00:03 crc kubenswrapper[4719]: I1124 09:00:03.579535 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3753f3de655fcabaca597797fc254df4a75284dbea6aaea33f2029f1be4cc8a"
Nov 24 09:00:03 crc kubenswrapper[4719]: I1124 09:00:03.579546 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"
Nov 24 09:00:04 crc kubenswrapper[4719]: I1124 09:00:04.562283 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:00:04 crc kubenswrapper[4719]: I1124 09:00:04.562621 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:00:34 crc kubenswrapper[4719]: I1124 09:00:34.562362 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:00:34 crc kubenswrapper[4719]: I1124 09:00:34.562921 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:01:04 crc kubenswrapper[4719]: I1124 09:01:04.562701 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:01:04 crc kubenswrapper[4719]: I1124 09:01:04.563689 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:01:04 crc kubenswrapper[4719]: I1124 09:01:04.563763 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6"
Nov 24 09:01:04 crc kubenswrapper[4719]: I1124 09:01:04.564680 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59c0a5952ea2845b8905fda1f05065d95523ac4e448325b0905c9139c8ad7b5a"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 09:01:04 crc kubenswrapper[4719]: I1124 09:01:04.564751 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://59c0a5952ea2845b8905fda1f05065d95523ac4e448325b0905c9139c8ad7b5a" gracePeriod=600
Nov 24 09:01:05 crc kubenswrapper[4719]: I1124 09:01:05.001307 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"59c0a5952ea2845b8905fda1f05065d95523ac4e448325b0905c9139c8ad7b5a"} Nov 24 09:01:05 crc kubenswrapper[4719]: I1124 09:01:05.001330 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="59c0a5952ea2845b8905fda1f05065d95523ac4e448325b0905c9139c8ad7b5a" exitCode=0 Nov 24 09:01:05 crc kubenswrapper[4719]: I1124 09:01:05.001641 4719 scope.go:117] "RemoveContainer" containerID="6ee7db23d1a2e2e34de90f9ae5a845fe4f5931ebbc021340620147bef0c2a10c" Nov 24 09:01:05 crc kubenswrapper[4719]: I1124 09:01:05.001656 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"866b965215eb055030d3994c07592f9bfb5c1f1196954930e0485b0a35bdf8f1"} Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.235635 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k8j62"] Nov 24 09:01:40 crc kubenswrapper[4719]: E1124 09:01:40.236295 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4ab863-3119-4f56-b1a3-b16105f0b7ed" containerName="collect-profiles" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.236308 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4ab863-3119-4f56-b1a3-b16105f0b7ed" containerName="collect-profiles" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.236402 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4ab863-3119-4f56-b1a3-b16105f0b7ed" containerName="collect-profiles" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.236764 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.251455 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k8j62"] Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.345720 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb9d1d11-f748-4938-aa71-ea96dab7391c-registry-certificates\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.345776 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb9d1d11-f748-4938-aa71-ea96dab7391c-trusted-ca\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.345839 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbc8\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-kube-api-access-8gbc8\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.345857 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb9d1d11-f748-4938-aa71-ea96dab7391c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.345889 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-registry-tls\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.345926 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.345944 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-bound-sa-token\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.345960 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/bb9d1d11-f748-4938-aa71-ea96dab7391c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.399083 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.447339 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-bound-sa-token\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.447392 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb9d1d11-f748-4938-aa71-ea96dab7391c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.447419 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb9d1d11-f748-4938-aa71-ea96dab7391c-registry-certificates\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.447436 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb9d1d11-f748-4938-aa71-ea96dab7391c-trusted-ca\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.447483 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbc8\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-kube-api-access-8gbc8\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.447499 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb9d1d11-f748-4938-aa71-ea96dab7391c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.447533 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-registry-tls\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.448015 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb9d1d11-f748-4938-aa71-ea96dab7391c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.448777 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb9d1d11-f748-4938-aa71-ea96dab7391c-registry-certificates\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.448961 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb9d1d11-f748-4938-aa71-ea96dab7391c-trusted-ca\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.452963 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb9d1d11-f748-4938-aa71-ea96dab7391c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.453935 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-registry-tls\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.462056 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbc8\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-kube-api-access-8gbc8\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.463683 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb9d1d11-f748-4938-aa71-ea96dab7391c-bound-sa-token\") pod \"image-registry-66df7c8f76-k8j62\" (UID: \"bb9d1d11-f748-4938-aa71-ea96dab7391c\") " pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.551258 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:40 crc kubenswrapper[4719]: I1124 09:01:40.930645 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k8j62"] Nov 24 09:01:41 crc kubenswrapper[4719]: I1124 09:01:41.189300 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" event={"ID":"bb9d1d11-f748-4938-aa71-ea96dab7391c","Type":"ContainerStarted","Data":"ded0a8d118b389dc7355cf9c1f44c80a4c057006c0615373563acd75f2800b1d"} Nov 24 09:01:41 crc kubenswrapper[4719]: I1124 09:01:41.189349 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" event={"ID":"bb9d1d11-f748-4938-aa71-ea96dab7391c","Type":"ContainerStarted","Data":"4b2d259fb186cbbdf5be8f68b5b8ab3728a9ed208aa7a3f9a948b39a704a2ce5"} Nov 24 09:01:41 crc kubenswrapper[4719]: I1124 09:01:41.189444 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:01:41 crc kubenswrapper[4719]: I1124 09:01:41.211910 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" podStartSLOduration=1.211888977 podStartE2EDuration="1.211888977s" podCreationTimestamp="2025-11-24 09:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:01:41.210324571 +0000 UTC m=+477.541597833" watchObservedRunningTime="2025-11-24 09:01:41.211888977 +0000 UTC m=+477.543162229" Nov 24 09:02:00 crc kubenswrapper[4719]: I1124 09:02:00.557577 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k8j62" Nov 24 09:02:00 crc kubenswrapper[4719]: I1124 09:02:00.611334 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j26j4"] Nov 24 09:02:25 crc kubenswrapper[4719]: I1124 09:02:25.675294 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" podUID="6739d077-6441-4b90-8e23-be9b0e3cb12a" containerName="registry" containerID="cri-o://ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43" gracePeriod=30 Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.029399 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.163296 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"6739d077-6441-4b90-8e23-be9b0e3cb12a\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.163362 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-tls\") pod \"6739d077-6441-4b90-8e23-be9b0e3cb12a\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.163413 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6739d077-6441-4b90-8e23-be9b0e3cb12a-installation-pull-secrets\") pod \"6739d077-6441-4b90-8e23-be9b0e3cb12a\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.163445 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-certificates\") pod \"6739d077-6441-4b90-8e23-be9b0e3cb12a\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.163496 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzmtk\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-kube-api-access-bzmtk\") pod \"6739d077-6441-4b90-8e23-be9b0e3cb12a\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.163520 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-trusted-ca\") pod \"6739d077-6441-4b90-8e23-be9b0e3cb12a\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.163568 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6739d077-6441-4b90-8e23-be9b0e3cb12a-ca-trust-extracted\") pod \"6739d077-6441-4b90-8e23-be9b0e3cb12a\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.163596 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-bound-sa-token\") pod \"6739d077-6441-4b90-8e23-be9b0e3cb12a\" (UID: \"6739d077-6441-4b90-8e23-be9b0e3cb12a\") " Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.164737 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "6739d077-6441-4b90-8e23-be9b0e3cb12a" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.164841 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "6739d077-6441-4b90-8e23-be9b0e3cb12a" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.170631 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-kube-api-access-bzmtk" (OuterVolumeSpecName: "kube-api-access-bzmtk") pod "6739d077-6441-4b90-8e23-be9b0e3cb12a" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a"). InnerVolumeSpecName "kube-api-access-bzmtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.172377 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6739d077-6441-4b90-8e23-be9b0e3cb12a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "6739d077-6441-4b90-8e23-be9b0e3cb12a" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.172437 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "6739d077-6441-4b90-8e23-be9b0e3cb12a" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.173415 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "6739d077-6441-4b90-8e23-be9b0e3cb12a" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.174487 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "6739d077-6441-4b90-8e23-be9b0e3cb12a" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.181744 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6739d077-6441-4b90-8e23-be9b0e3cb12a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "6739d077-6441-4b90-8e23-be9b0e3cb12a" (UID: "6739d077-6441-4b90-8e23-be9b0e3cb12a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.264512 4719 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6739d077-6441-4b90-8e23-be9b0e3cb12a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.264545 4719 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.264555 4719 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.264564 4719 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6739d077-6441-4b90-8e23-be9b0e3cb12a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.264574 4719 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.264582 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzmtk\" (UniqueName: \"kubernetes.io/projected/6739d077-6441-4b90-8e23-be9b0e3cb12a-kube-api-access-bzmtk\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.264590 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6739d077-6441-4b90-8e23-be9b0e3cb12a-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.416002 4719 generic.go:334] "Generic (PLEG): container finished" podID="6739d077-6441-4b90-8e23-be9b0e3cb12a" containerID="ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43" exitCode=0 Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.416089 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" event={"ID":"6739d077-6441-4b90-8e23-be9b0e3cb12a","Type":"ContainerDied","Data":"ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43"} Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.416123 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.416340 4719 scope.go:117] "RemoveContainer" containerID="ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.416321 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j26j4" event={"ID":"6739d077-6441-4b90-8e23-be9b0e3cb12a","Type":"ContainerDied","Data":"5806835d62c644405cbe2d39b70e86e45f5f0c318f4eb13380accaee23dbc20d"} Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.434562 4719 scope.go:117] "RemoveContainer" containerID="ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43" Nov 24 09:02:26 crc kubenswrapper[4719]: E1124 09:02:26.434896 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43\": container with ID starting with ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43 not found: ID does not exist" containerID="ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.434972 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43"} err="failed to get container status \"ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43\": rpc error: code = NotFound desc = could not find container \"ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43\": container with ID starting with ee9704c64b685f4d29ec4fb2f8d566b62c3d8e9edfd61e6b5b05d2033101bb43 not found: ID does not exist" Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.455006 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j26j4"] Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.459386 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j26j4"] Nov 24 09:02:26 crc kubenswrapper[4719]: I1124 09:02:26.528327 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6739d077-6441-4b90-8e23-be9b0e3cb12a" path="/var/lib/kubelet/pods/6739d077-6441-4b90-8e23-be9b0e3cb12a/volumes" Nov 24 09:03:04 crc kubenswrapper[4719]: I1124 09:03:04.561563 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:03:04 crc kubenswrapper[4719]: I1124 09:03:04.562150 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:03:34 crc kubenswrapper[4719]: I1124 09:03:34.562311 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 
Nov 24 09:03:04 crc kubenswrapper[4719]: I1124 09:03:04.561563 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:03:04 crc kubenswrapper[4719]: I1124 09:03:04.562150 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:03:34 crc kubenswrapper[4719]: I1124 09:03:34.562311 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:03:34 crc kubenswrapper[4719]: I1124 09:03:34.562808 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.561378 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.561791 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.561841 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6"
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.562387 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"866b965215eb055030d3994c07592f9bfb5c1f1196954930e0485b0a35bdf8f1"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.562442 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://866b965215eb055030d3994c07592f9bfb5c1f1196954930e0485b0a35bdf8f1" gracePeriod=600
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.931125 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="866b965215eb055030d3994c07592f9bfb5c1f1196954930e0485b0a35bdf8f1" exitCode=0
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.931325 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"866b965215eb055030d3994c07592f9bfb5c1f1196954930e0485b0a35bdf8f1"}
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.931427 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"e9bafa1ff8cebfd6f7a09482f5227abe69557f213f9dda16fe6ddb7212992d3f"}
Nov 24 09:04:04 crc kubenswrapper[4719]: I1124 09:04:04.931445 4719 scope.go:117] "RemoveContainer" containerID="59c0a5952ea2845b8905fda1f05065d95523ac4e448325b0905c9139c8ad7b5a"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.567589 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qg4fz"]
Nov 24 09:04:08 crc kubenswrapper[4719]: E1124 09:04:08.568366 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6739d077-6441-4b90-8e23-be9b0e3cb12a" containerName="registry"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.568379 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6739d077-6441-4b90-8e23-be9b0e3cb12a" containerName="registry"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.568493 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="6739d077-6441-4b90-8e23-be9b0e3cb12a" containerName="registry"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.568956 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.570578 4719 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-fmwgs"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.571392 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.571458 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.577851 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qg4fz"]
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.593560 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rwrqz"]
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.594428 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rwrqz"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.596986 4719 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-p85nl"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.606848 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-w9hp2"]
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.607700 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.610152 4719 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-pqxmr"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.619025 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-w9hp2"]
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.623113 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rwrqz"]
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.665961 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rhs\" (UniqueName: \"kubernetes.io/projected/6810bbaf-a058-4255-a776-13435cfd7f16-kube-api-access-l9rhs\") pod \"cert-manager-5b446d88c5-rwrqz\" (UID: \"6810bbaf-a058-4255-a776-13435cfd7f16\") " pod="cert-manager/cert-manager-5b446d88c5-rwrqz"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.666017 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7gfl\" (UniqueName: \"kubernetes.io/projected/2e8b2163-ffd6-4935-a172-bdae97882475-kube-api-access-x7gfl\") pod \"cert-manager-cainjector-7f985d654d-qg4fz\" (UID: \"2e8b2163-ffd6-4935-a172-bdae97882475\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.766911 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9rhs\" (UniqueName: \"kubernetes.io/projected/6810bbaf-a058-4255-a776-13435cfd7f16-kube-api-access-l9rhs\") pod \"cert-manager-5b446d88c5-rwrqz\" (UID: \"6810bbaf-a058-4255-a776-13435cfd7f16\") " pod="cert-manager/cert-manager-5b446d88c5-rwrqz"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.766973 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7gfl\" (UniqueName: \"kubernetes.io/projected/2e8b2163-ffd6-4935-a172-bdae97882475-kube-api-access-x7gfl\") pod \"cert-manager-cainjector-7f985d654d-qg4fz\" (UID: \"2e8b2163-ffd6-4935-a172-bdae97882475\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.767010 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5gvh\" (UniqueName: \"kubernetes.io/projected/55b792be-fd7f-49c7-b9c9-e90acd66701a-kube-api-access-p5gvh\") pod \"cert-manager-webhook-5655c58dd6-w9hp2\" (UID: \"55b792be-fd7f-49c7-b9c9-e90acd66701a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.785551 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9rhs\" (UniqueName: \"kubernetes.io/projected/6810bbaf-a058-4255-a776-13435cfd7f16-kube-api-access-l9rhs\") pod \"cert-manager-5b446d88c5-rwrqz\" (UID: \"6810bbaf-a058-4255-a776-13435cfd7f16\") " pod="cert-manager/cert-manager-5b446d88c5-rwrqz"
Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.788059 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7gfl\" (UniqueName: \"kubernetes.io/projected/2e8b2163-ffd6-4935-a172-bdae97882475-kube-api-access-x7gfl\") pod \"cert-manager-cainjector-7f985d654d-qg4fz\" (UID: \"2e8b2163-ffd6-4935-a172-bdae97882475\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz"
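[Annotation] The cert-manager pods mount only kube-api-access-* volumes. These are projected volumes that combine a bound service-account token with the cluster CA bundle; the kube-root-ca.crt ConfigMap whose cache population is logged above is one of the projected sources. A sketch of the usual shape of such a volume; the field values follow upstream defaults and should be treated as illustrative rather than read from this log:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // Rough shape of a kube-api-access-* projected volume like the ones
    // mounted above (kubernetes.io/projected plugin in the log).
    func main() {
        expiry := int64(3607) // default bound-token lifetime; assumption
        vol := corev1.Volume{
            Name: "kube-api-access-l9rhs",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: &expiry,
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                    },
                },
            },
        }
        fmt.Println(vol.Name)
    }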
pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz" Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.867690 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5gvh\" (UniqueName: \"kubernetes.io/projected/55b792be-fd7f-49c7-b9c9-e90acd66701a-kube-api-access-p5gvh\") pod \"cert-manager-webhook-5655c58dd6-w9hp2\" (UID: \"55b792be-fd7f-49c7-b9c9-e90acd66701a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2" Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.885406 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5gvh\" (UniqueName: \"kubernetes.io/projected/55b792be-fd7f-49c7-b9c9-e90acd66701a-kube-api-access-p5gvh\") pod \"cert-manager-webhook-5655c58dd6-w9hp2\" (UID: \"55b792be-fd7f-49c7-b9c9-e90acd66701a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2" Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.889263 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz" Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.905613 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rwrqz" Nov 24 09:04:08 crc kubenswrapper[4719]: I1124 09:04:08.920491 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2" Nov 24 09:04:09 crc kubenswrapper[4719]: I1124 09:04:09.185914 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-w9hp2"] Nov 24 09:04:09 crc kubenswrapper[4719]: W1124 09:04:09.188866 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55b792be_fd7f_49c7_b9c9_e90acd66701a.slice/crio-644246c3682466f25360396ad71e6bf6b91f5cd3753777e559c877406504b7b8 WatchSource:0}: Error finding container 644246c3682466f25360396ad71e6bf6b91f5cd3753777e559c877406504b7b8: Status 404 returned error can't find the container with id 644246c3682466f25360396ad71e6bf6b91f5cd3753777e559c877406504b7b8 Nov 24 09:04:09 crc kubenswrapper[4719]: I1124 09:04:09.191504 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:04:09 crc kubenswrapper[4719]: I1124 09:04:09.326749 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rwrqz"] Nov 24 09:04:09 crc kubenswrapper[4719]: I1124 09:04:09.329739 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qg4fz"] Nov 24 09:04:09 crc kubenswrapper[4719]: W1124 09:04:09.337088 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e8b2163_ffd6_4935_a172_bdae97882475.slice/crio-c3b691236d5bcc1c3e971f53aae93ebab4e5d46e836902b37195c831946bd2b8 WatchSource:0}: Error finding container c3b691236d5bcc1c3e971f53aae93ebab4e5d46e836902b37195c831946bd2b8: Status 404 returned error can't find the container with id c3b691236d5bcc1c3e971f53aae93ebab4e5d46e836902b37195c831946bd2b8 Nov 24 09:04:09 crc kubenswrapper[4719]: W1124 09:04:09.340979 4719 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6810bbaf_a058_4255_a776_13435cfd7f16.slice/crio-08981372ea35a1fe2c912540ee1d3fbe625a2d7d230346860e406c5330752c95 WatchSource:0}: Error finding container 08981372ea35a1fe2c912540ee1d3fbe625a2d7d230346860e406c5330752c95: Status 404 returned error can't find the container with id 08981372ea35a1fe2c912540ee1d3fbe625a2d7d230346860e406c5330752c95 Nov 24 09:04:09 crc kubenswrapper[4719]: I1124 09:04:09.977552 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2" event={"ID":"55b792be-fd7f-49c7-b9c9-e90acd66701a","Type":"ContainerStarted","Data":"644246c3682466f25360396ad71e6bf6b91f5cd3753777e559c877406504b7b8"} Nov 24 09:04:09 crc kubenswrapper[4719]: I1124 09:04:09.978904 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz" event={"ID":"2e8b2163-ffd6-4935-a172-bdae97882475","Type":"ContainerStarted","Data":"c3b691236d5bcc1c3e971f53aae93ebab4e5d46e836902b37195c831946bd2b8"} Nov 24 09:04:09 crc kubenswrapper[4719]: I1124 09:04:09.980081 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rwrqz" event={"ID":"6810bbaf-a058-4255-a776-13435cfd7f16","Type":"ContainerStarted","Data":"08981372ea35a1fe2c912540ee1d3fbe625a2d7d230346860e406c5330752c95"} Nov 24 09:04:12 crc kubenswrapper[4719]: I1124 09:04:12.996254 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz" event={"ID":"2e8b2163-ffd6-4935-a172-bdae97882475","Type":"ContainerStarted","Data":"8eddcd540206344a0388b9fb9ca4eaccaf04ea2cc8136b184be46e703c39958b"} Nov 24 09:04:12 crc kubenswrapper[4719]: I1124 09:04:12.997666 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rwrqz" event={"ID":"6810bbaf-a058-4255-a776-13435cfd7f16","Type":"ContainerStarted","Data":"82185dd5e01e78f73eaafc07531064219ef9c0e98bef0612c1145d69ded2f26c"} Nov 24 09:04:12 crc kubenswrapper[4719]: I1124 09:04:12.998890 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2" event={"ID":"55b792be-fd7f-49c7-b9c9-e90acd66701a","Type":"ContainerStarted","Data":"1f029b6b8f76b595a38de42b2419524748c10b2dc8984ff7498395ad307ac796"} Nov 24 09:04:12 crc kubenswrapper[4719]: I1124 09:04:12.999082 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2" Nov 24 09:04:13 crc kubenswrapper[4719]: I1124 09:04:13.026530 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2" podStartSLOduration=1.9758713129999999 podStartE2EDuration="5.02650008s" podCreationTimestamp="2025-11-24 09:04:08 +0000 UTC" firstStartedPulling="2025-11-24 09:04:09.191184677 +0000 UTC m=+625.522457929" lastFinishedPulling="2025-11-24 09:04:12.241813444 +0000 UTC m=+628.573086696" observedRunningTime="2025-11-24 09:04:13.025508311 +0000 UTC m=+629.356781573" watchObservedRunningTime="2025-11-24 09:04:13.02650008 +0000 UTC m=+629.357773332" Nov 24 09:04:13 crc kubenswrapper[4719]: I1124 09:04:13.028304 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-qg4fz" podStartSLOduration=2.125025358 podStartE2EDuration="5.028295433s" podCreationTimestamp="2025-11-24 09:04:08 +0000 UTC" firstStartedPulling="2025-11-24 
09:04:09.339005872 +0000 UTC m=+625.670279114" lastFinishedPulling="2025-11-24 09:04:12.242275937 +0000 UTC m=+628.573549189" observedRunningTime="2025-11-24 09:04:13.013802838 +0000 UTC m=+629.345076100" watchObservedRunningTime="2025-11-24 09:04:13.028295433 +0000 UTC m=+629.359568685" Nov 24 09:04:13 crc kubenswrapper[4719]: I1124 09:04:13.044244 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-rwrqz" podStartSLOduration=2.076292419 podStartE2EDuration="5.04422727s" podCreationTimestamp="2025-11-24 09:04:08 +0000 UTC" firstStartedPulling="2025-11-24 09:04:09.342568796 +0000 UTC m=+625.673842048" lastFinishedPulling="2025-11-24 09:04:12.310503647 +0000 UTC m=+628.641776899" observedRunningTime="2025-11-24 09:04:13.041697866 +0000 UTC m=+629.372971118" watchObservedRunningTime="2025-11-24 09:04:13.04422727 +0000 UTC m=+629.375500522" Nov 24 09:04:18 crc kubenswrapper[4719]: I1124 09:04:18.924122 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-w9hp2" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.121189 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvqzq"] Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.121615 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovn-controller" containerID="cri-o://38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4" gracePeriod=30 Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.121984 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="sbdb" containerID="cri-o://82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d" gracePeriod=30 Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.122029 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="nbdb" containerID="cri-o://e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec" gracePeriod=30 Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.122098 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="northd" containerID="cri-o://a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d" gracePeriod=30 Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.122143 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e" gracePeriod=30 Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.122187 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kube-rbac-proxy-node" containerID="cri-o://83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6" gracePeriod=30 Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.122243 4719 
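[Annotation] The podStartSLOduration values above are the end-to-end start duration with the image-pull window subtracted. For the webhook pod: 5.02650008s end-to-end, minus a pull window of 09:04:12.241813444 − 09:04:09.191184677 = 3.050628767s, gives 1.975871313s, matching the logged value. The same arithmetic, worked in Go from the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces the webhook pod's podStartSLOduration from the four
    // timestamps in the record above.
    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-11-24 09:04:08 +0000 UTC")
        firstPull := parse("2025-11-24 09:04:09.191184677 +0000 UTC")
        lastPull := parse("2025-11-24 09:04:12.241813444 +0000 UTC")
        running := parse("2025-11-24 09:04:13.02650008 +0000 UTC")

        e2e := running.Sub(created)          // 5.02650008s, the logged podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // pull time subtracted -> 1.975871313s, as logged
        fmt.Println(e2e, slo)
    }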
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovn-acl-logging" containerID="cri-o://4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd" gracePeriod=30 Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.160689 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller" containerID="cri-o://6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4" gracePeriod=30 Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.872482 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/3.log" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.874961 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovn-acl-logging/0.log" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.875576 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovn-controller/0.log" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.875985 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.946955 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9fzs7"] Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947217 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovn-controller" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947232 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovn-controller" Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947246 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947254 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller" Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947264 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="northd" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947271 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="northd" Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947279 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kubecfg-setup" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947286 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kubecfg-setup" Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947297 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller" Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 
09:04:19.947305 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947314 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kube-rbac-proxy-node"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947321 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kube-rbac-proxy-node"
Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947334 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kube-rbac-proxy-ovn-metrics"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947341 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kube-rbac-proxy-ovn-metrics"
Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947354 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="sbdb"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947361 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="sbdb"
Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947373 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947380 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947390 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947397 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947408 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovn-acl-logging"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947415 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovn-acl-logging"
Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947426 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="nbdb"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947434 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="nbdb"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947568 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947581 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947591 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947602 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="nbdb"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947611 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="northd"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947627 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="sbdb"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947635 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovn-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947645 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kube-rbac-proxy-node"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947657 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovn-acl-logging"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947665 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="kube-rbac-proxy-ovn-metrics"
Nov 24 09:04:19 crc kubenswrapper[4719]: E1124 09:04:19.947771 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947782 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947898 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.947909 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerName="ovnkube-controller"
Nov 24 09:04:19 crc kubenswrapper[4719]: I1124 09:04:19.949900 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7"
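[Editor's note] The state_mem.go and cpu_manager.go entries above show the kubelet CPU manager dropping stale CPUSet assignments for the deleted pod. The assignments live in an on-disk checkpoint; the sketch below dumps it. The path and JSON field names are assumptions based on common kubelet defaults (/var/lib/kubelet/cpu_manager_state), not something this log confirms.

```go
// inspect_cpu_state.go - minimal sketch (not kubelet code) that prints the
// CPU manager checkpoint whose in-memory mirror the log lines above mutate.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// checkpoint mirrors the assumed layout of cpu_manager_state: with the static
// policy, "entries" maps podUID -> containerName -> cpuset string.
type checkpoint struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state") // assumed default path
	if err != nil {
		log.Fatal(err)
	}
	var cp checkpoint
	if err := json.Unmarshal(raw, &cp); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("policy=%s defaultCPUSet=%q\n", cp.PolicyName, cp.DefaultCPUSet)
	for pod, containers := range cp.Entries {
		for name, set := range containers {
			fmt.Printf("pod=%s container=%s cpuset=%s\n", pod, name, set)
		}
	}
}
```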
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015049 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7xp6\" (UniqueName: \"kubernetes.io/projected/76442e88-72e2-4a86-99b4-bd07f0490aa9-kube-api-access-f7xp6\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015106 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-netns\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015136 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-script-lib\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015174 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-env-overrides\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015197 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-node-log\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015217 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-systemd\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015233 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-netd\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015263 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-config\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015279 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015299 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovn-node-metrics-cert\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015319 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-systemd-units\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015318 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015341 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-kubelet\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015381 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015421 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-etc-openvswitch\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015471 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-ovn\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015492 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-bin\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015556 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-var-lib-openvswitch\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015585 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-slash\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015624 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-log-socket\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015670 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-openvswitch\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015699 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-ovn-kubernetes\") pod \"76442e88-72e2-4a86-99b4-bd07f0490aa9\" (UID: \"76442e88-72e2-4a86-99b4-bd07f0490aa9\") "
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.015792 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.016061 4719 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.016074 4719 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-netns\") on node \"crc\" DevicePath \"\""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.016094 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.016105 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.016132 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-node-log" (OuterVolumeSpecName: "node-log") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017185 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017201 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017216 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017240 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017268 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017292 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017315 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-slash" (OuterVolumeSpecName: "host-slash") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017340 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017364 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-log-socket" (OuterVolumeSpecName: "log-socket") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017388 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.017560 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.021502 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.021671 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76442e88-72e2-4a86-99b4-bd07f0490aa9-kube-api-access-f7xp6" (OuterVolumeSpecName: "kube-api-access-f7xp6") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "kube-api-access-f7xp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.032414 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "76442e88-72e2-4a86-99b4-bd07f0490aa9" (UID: "76442e88-72e2-4a86-99b4-bd07f0490aa9"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
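[Editor's note] Each UnmountVolume/TearDown pair above removes one volume of the deleted ovnkube-node-fvqzq pod. On disk these volumes live under the kubelet pod directory; the sketch below walks that tree to show what is still mounted for a pod mid-teardown. The directory layout (/var/lib/kubelet/pods/&lt;podUID&gt;/volumes/&lt;plugin&gt;/&lt;name&gt;) is an assumption based on the standard kubelet layout, not something asserted by these log lines.

```go
// volume_leftovers.go - minimal sketch (assumed layout, not kubelet code):
// list any volume directories remaining for a pod whose volumes the
// reconciler is tearing down, like pod 76442e88... above.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	podUID := "76442e88-72e2-4a86-99b4-bd07f0490aa9" // pod being torn down above
	root := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	plugins, err := os.ReadDir(root)
	if os.IsNotExist(err) {
		fmt.Println("no volume directories left; teardown complete")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range plugins { // e.g. kubernetes.io~configmap, kubernetes.io~secret
		vols, err := os.ReadDir(filepath.Join(root, p.Name()))
		if err != nil {
			log.Fatal(err)
		}
		for _, v := range vols {
			fmt.Printf("still present: %s/%s\n", p.Name(), v.Name())
		}
	}
}
```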
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.033847 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovnkube-controller/3.log"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.035377 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovn-acl-logging/0.log"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.035837 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvqzq_76442e88-72e2-4a86-99b4-bd07f0490aa9/ovn-controller/0.log"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036144 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4" exitCode=0
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036167 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d" exitCode=0
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036175 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec" exitCode=0
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036182 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d" exitCode=0
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036188 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e" exitCode=0
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036193 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6" exitCode=0
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036200 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd" exitCode=143
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036206 4719 generic.go:334] "Generic (PLEG): container finished" podID="76442e88-72e2-4a86-99b4-bd07f0490aa9" containerID="38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4" exitCode=143
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036238 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036264 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036274 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036283 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036292 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036301 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036310 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036321 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036326 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036331 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036338 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036344 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036349 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036354 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036360 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036367 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036374 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036381 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036387 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036392 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036398 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036404 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036409 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036413 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036419 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036424 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036431 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036438 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036443 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036448 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036454 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036459 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036464 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036469 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036473 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036478 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036483 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036490 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq" event={"ID":"76442e88-72e2-4a86-99b4-bd07f0490aa9","Type":"ContainerDied","Data":"0ca030c9bc4e3269409339d8dc9218eb016fcb0bc34e23ccdb7db116c20d3eee"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036535 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036565 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036572 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036577 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036583 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036588 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036592 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036597 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036603 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036607 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036621 4719 scope.go:117] "RemoveContainer" containerID="6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.036760 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvqzq"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.042573 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v8ghd_1e9122c9-57ef-4b8f-92a8-593533891255/kube-multus/1.log"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.043236 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v8ghd_1e9122c9-57ef-4b8f-92a8-593533891255/kube-multus/0.log"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.043287 4719 generic.go:334] "Generic (PLEG): container finished" podID="1e9122c9-57ef-4b8f-92a8-593533891255" containerID="6bfb8a0689605bb34e2409cb37e1feb999c406f1d39df1fae17d8839dd58e911" exitCode=2
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.043406 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v8ghd" event={"ID":"1e9122c9-57ef-4b8f-92a8-593533891255","Type":"ContainerDied","Data":"6bfb8a0689605bb34e2409cb37e1feb999c406f1d39df1fae17d8839dd58e911"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.043432 4719 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452"}
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.043938 4719 scope.go:117] "RemoveContainer" containerID="6bfb8a0689605bb34e2409cb37e1feb999c406f1d39df1fae17d8839dd58e911"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.044142 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-v8ghd_openshift-multus(1e9122c9-57ef-4b8f-92a8-593533891255)\"" pod="openshift-multus/multus-v8ghd" podUID="1e9122c9-57ef-4b8f-92a8-593533891255"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.062843 4719 scope.go:117] "RemoveContainer" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"
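[Editor's note] The burst of "SyncLoop (PLEG)" entries above records one ContainerDied event per container of the deleted pod, interleaved with repeated "Failed to issue the request to remove container" lines. A small sketch to tally those events from a saved copy of this journal; it assumes only the exact line format shown above and reads from stdin (e.g. journalctl piped in, assuming the kubelet unit name on this host).

```go
// pleg_tally.go - count PLEG ContainerDied events per pod from journal text on
// stdin. The regexp matches only the event={"ID":...,"Type":"ContainerDied",...}
// form that appears in the lines above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var died = regexp.MustCompile(`event=\{"ID":"([0-9a-f-]+)","Type":"ContainerDied","Data":"([0-9a-f]+)"\}`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := died.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
			fmt.Printf("pod %s: container %.12s died\n", m[1], m[2])
		}
	}
	for pod, n := range counts {
		fmt.Printf("pod %s: %d ContainerDied events\n", pod, n)
	}
}
```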
containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.085300 4719 scope.go:117] "RemoveContainer" containerID="82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.086563 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvqzq"] Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.090285 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvqzq"] Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.098412 4719 scope.go:117] "RemoveContainer" containerID="e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.117416 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.117474 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovnkube-config\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.117497 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-run-ovn-kubernetes\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.117524 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-run-netns\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.123972 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-cni-bin\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124106 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovnkube-script-lib\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124193 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-var-lib-openvswitch\") 
pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124278 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-systemd\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124312 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-ovn\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124516 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-openvswitch\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124603 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-systemd-units\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124641 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-slash\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124670 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-node-log\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124802 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-kubelet\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124835 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-log-socket\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124860 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-env-overrides\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124933 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwmwp\" (UniqueName: \"kubernetes.io/projected/c5bf98b9-f8bc-49ca-92f4-b56237133059-kube-api-access-nwmwp\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124967 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-cni-netd\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.124989 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovn-node-metrics-cert\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125025 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-etc-openvswitch\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125257 4719 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125301 4719 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125320 4719 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/76442e88-72e2-4a86-99b4-bd07f0490aa9-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125339 4719 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125353 4719 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125364 4719 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: 
I1124 09:04:20.125376 4719 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125387 4719 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125400 4719 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125412 4719 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125424 4719 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125445 4719 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125458 4719 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125470 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7xp6\" (UniqueName: \"kubernetes.io/projected/76442e88-72e2-4a86-99b4-bd07f0490aa9-kube-api-access-f7xp6\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125484 4719 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/76442e88-72e2-4a86-99b4-bd07f0490aa9-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125495 4719 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125521 4719 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.125533 4719 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76442e88-72e2-4a86-99b4-bd07f0490aa9-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.131364 4719 scope.go:117] "RemoveContainer" containerID="a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.145234 4719 scope.go:117] "RemoveContainer" containerID="982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e" Nov 24 09:04:20 crc 
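[Editor's note] The reconciler entries above identify each volume by a UniqueName of the form &lt;plugin&gt;/&lt;podUID&gt;-&lt;volumeName&gt;, which is how the same volume names appear under both the old pod UID (76442e88...) and the replacement pod UID (c5bf98b9...). A tiny illustrative helper for splitting such names; the fixed 36-character UID length is an assumption that holds for the UUID-style pod UIDs in this log.

```go
// unique_name.go - split a volume UniqueName like the ones logged above into
// its plugin, pod UID, and volume name parts (sketch, not kubelet code).
package main

import (
	"fmt"
	"strings"
)

func splitUniqueName(u string) (plugin, podUID, volume string, ok bool) {
	i := strings.LastIndex(u, "/")
	if i < 0 || len(u)-i-1 < 37 { // need a 36-char UUID plus the "-" separator
		return "", "", "", false
	}
	plugin, rest := u[:i], u[i+1:]
	return plugin, rest[:36], rest[37:], true
}

func main() {
	u := "kubernetes.io/secret/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovn-node-metrics-cert"
	if plugin, uid, vol, ok := splitUniqueName(u); ok {
		fmt.Printf("plugin=%s podUID=%s volume=%s\n", plugin, uid, vol)
	}
}
```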
kubenswrapper[4719]: I1124 09:04:20.158099 4719 scope.go:117] "RemoveContainer" containerID="83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.171321 4719 scope.go:117] "RemoveContainer" containerID="4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.182689 4719 scope.go:117] "RemoveContainer" containerID="38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.205201 4719 scope.go:117] "RemoveContainer" containerID="b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.218204 4719 scope.go:117] "RemoveContainer" containerID="6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.218651 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4\": container with ID starting with 6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4 not found: ID does not exist" containerID="6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.218693 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"} err="failed to get container status \"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4\": rpc error: code = NotFound desc = could not find container \"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4\": container with ID starting with 6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4 not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.218727 4719 scope.go:117] "RemoveContainer" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.219143 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\": container with ID starting with c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9 not found: ID does not exist" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.219167 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"} err="failed to get container status \"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\": rpc error: code = NotFound desc = could not find container \"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\": container with ID starting with c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9 not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.219181 4719 scope.go:117] "RemoveContainer" containerID="82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.219540 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\": container with ID starting with 82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d not found: ID does not exist" containerID="82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.219588 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"} err="failed to get container status \"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\": rpc error: code = NotFound desc = could not find container \"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\": container with ID starting with 82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.219621 4719 scope.go:117] "RemoveContainer" containerID="e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.219949 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\": container with ID starting with e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec not found: ID does not exist" containerID="e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.219977 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"} err="failed to get container status \"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\": rpc error: code = NotFound desc = could not find container \"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\": container with ID starting with e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.219992 4719 scope.go:117] "RemoveContainer" containerID="a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.220236 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\": container with ID starting with a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d not found: ID does not exist" containerID="a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.220266 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"} err="failed to get container status \"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\": rpc error: code = NotFound desc = could not find container \"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\": container with ID starting with a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.220285 4719 scope.go:117] "RemoveContainer" containerID="982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.220531 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\": container with ID starting with 982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e not found: ID does not exist" containerID="982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.220559 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"} err="failed to get container status \"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\": rpc error: code = NotFound desc = could not find container \"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\": container with ID starting with 982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.220578 4719 scope.go:117] "RemoveContainer" containerID="83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.220818 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\": container with ID starting with 83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6 not found: ID does not exist" containerID="83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.220846 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"} err="failed to get container status \"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\": rpc error: code = NotFound desc = could not find container \"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\": container with ID starting with 83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6 not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.220864 4719 scope.go:117] "RemoveContainer" containerID="4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.221116 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\": container with ID starting with 4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd not found: ID does not exist" containerID="4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.221140 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"} err="failed to get container status \"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\": rpc error: code = NotFound desc = could not find container \"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\": container with ID starting with 4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.221156 4719 scope.go:117] "RemoveContainer" containerID="38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.221378 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\": container with ID starting with 38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4 not found: ID does not exist" containerID="38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.221402 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"} err="failed to get container status \"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\": rpc error: code = NotFound desc = could not find container \"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\": container with ID starting with 38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4 not found: ID does not exist"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.221418 4719 scope.go:117] "RemoveContainer" containerID="b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"
Nov 24 09:04:20 crc kubenswrapper[4719]: E1124 09:04:20.221622 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\": container with ID starting with b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1 not found: ID does not exist" containerID="b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.221653 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"} err="failed to get container status \"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\": rpc error: code = NotFound desc = could not find container \"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\": container with ID starting with b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1 not found: ID does not exist"
containerID="982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.225532 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"} err="failed to get container status \"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\": rpc error: code = NotFound desc = could not find container \"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\": container with ID starting with 982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.225553 4719 scope.go:117] "RemoveContainer" containerID="83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.225779 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"} err="failed to get container status \"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\": rpc error: code = NotFound desc = could not find container \"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\": container with ID starting with 83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.225796 4719 scope.go:117] "RemoveContainer" containerID="4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.225985 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"} err="failed to get container status \"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\": rpc error: code = NotFound desc = could not find container \"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\": container with ID starting with 4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226009 4719 scope.go:117] "RemoveContainer" containerID="38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226084 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-systemd-units\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226126 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-slash\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226156 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-node-log\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc 
kubenswrapper[4719]: I1124 09:04:20.226185 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-slash\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226158 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-systemd-units\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226190 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-kubelet\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226232 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"} err="failed to get container status \"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\": rpc error: code = NotFound desc = could not find container \"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\": container with ID starting with 38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226250 4719 scope.go:117] "RemoveContainer" containerID="b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226252 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-kubelet\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226239 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-node-log\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226276 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-log-socket\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226314 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-env-overrides\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226320 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-log-socket\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226366 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwmwp\" (UniqueName: \"kubernetes.io/projected/c5bf98b9-f8bc-49ca-92f4-b56237133059-kube-api-access-nwmwp\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226387 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-cni-netd\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226410 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovn-node-metrics-cert\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226438 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-etc-openvswitch\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226467 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226472 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-cni-netd\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226473 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"} err="failed to get container status \"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\": rpc error: code = NotFound desc = could not find container \"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\": container with ID starting with b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226493 4719 scope.go:117] "RemoveContainer" containerID="6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226501 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226506 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovnkube-config\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226525 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-run-ovn-kubernetes\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226550 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-run-netns\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226577 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-run-ovn-kubernetes\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226593 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-run-netns\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226603 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-cni-bin\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226585 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-host-cni-bin\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226632 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovnkube-script-lib\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226652 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-var-lib-openvswitch\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226544 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-etc-openvswitch\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226680 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-systemd\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226724 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-ovn\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226758 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-openvswitch\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226696 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-systemd\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226762 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-var-lib-openvswitch\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226829 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-openvswitch\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.226859 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c5bf98b9-f8bc-49ca-92f4-b56237133059-run-ovn\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227070 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-env-overrides\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227300 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"} err="failed to get container status \"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4\": rpc error: code = NotFound desc = could not find container \"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4\": container with ID starting with 6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227334 4719 scope.go:117] "RemoveContainer" containerID="c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227436 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovnkube-script-lib\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227448 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovnkube-config\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227550 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9"} err="failed to get container status \"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\": rpc error: code = NotFound desc = could not find container \"c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9\": container with ID starting with c7444300b8ebac90c2898325acfd1a55c995ab2d9a2733733a5d1ff80d185ac9 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227575 4719 scope.go:117] "RemoveContainer" containerID="82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227761 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d"} err="failed to get container status \"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\": rpc error: code = NotFound desc = could not find container \"82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d\": container with ID starting with 82c1f5e5eca022edf1e5a7e47f7889d323790d0736caa18c944956ff03da7a6d not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.227818 4719 scope.go:117] "RemoveContainer" containerID="e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228007 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec"} err="failed to get container status \"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\": rpc error: code = NotFound desc = could not find container 
\"e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec\": container with ID starting with e7cd676cb83a49172823b99db7a4782c325dd1d496a448b615969c62cefcadec not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228046 4719 scope.go:117] "RemoveContainer" containerID="a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228239 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d"} err="failed to get container status \"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\": rpc error: code = NotFound desc = could not find container \"a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d\": container with ID starting with a7e520615d5c938f22d0e877541ca1260b53cfb4e23e2363160449c96175fb8d not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228262 4719 scope.go:117] "RemoveContainer" containerID="982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228434 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e"} err="failed to get container status \"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\": rpc error: code = NotFound desc = could not find container \"982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e\": container with ID starting with 982077f6ec683a1d3587f6e1e6478d65b74bfc9110f39483f7fcf60eedba9e7e not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228457 4719 scope.go:117] "RemoveContainer" containerID="83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228607 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6"} err="failed to get container status \"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\": rpc error: code = NotFound desc = could not find container \"83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6\": container with ID starting with 83fc63256409e5c3f72492af6f2df40649fc1aee84d0106a306c98c0910ef1e6 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228629 4719 scope.go:117] "RemoveContainer" containerID="4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228781 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd"} err="failed to get container status \"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\": rpc error: code = NotFound desc = could not find container \"4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd\": container with ID starting with 4eb8040e3710b231cb21bcd33353d9304bfc00850f22d7c5bac458bb169fb0fd not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228803 4719 scope.go:117] "RemoveContainer" containerID="38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.228977 4719 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4"} err="failed to get container status \"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\": rpc error: code = NotFound desc = could not find container \"38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4\": container with ID starting with 38721d75b89309ab1febf4fd0227a898053045ec63d5720d0b7d1dbaab83e3c4 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.229000 4719 scope.go:117] "RemoveContainer" containerID="b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.229232 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1"} err="failed to get container status \"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\": rpc error: code = NotFound desc = could not find container \"b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1\": container with ID starting with b5456352b5d60f4df33460471c0484b8742a77dd28793481f047013faf7403f1 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.229255 4719 scope.go:117] "RemoveContainer" containerID="6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.229515 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4"} err="failed to get container status \"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4\": rpc error: code = NotFound desc = could not find container \"6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4\": container with ID starting with 6dde89a83c4ee8b26a9995ff562ec4a1317500212352df8765530f7a6eff5bb4 not found: ID does not exist" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.229905 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c5bf98b9-f8bc-49ca-92f4-b56237133059-ovn-node-metrics-cert\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.241835 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwmwp\" (UniqueName: \"kubernetes.io/projected/c5bf98b9-f8bc-49ca-92f4-b56237133059-kube-api-access-nwmwp\") pod \"ovnkube-node-9fzs7\" (UID: \"c5bf98b9-f8bc-49ca-92f4-b56237133059\") " pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.263224 4719 util.go:30] "No sandbox for pod can be found. 
Nov 24 09:04:20 crc kubenswrapper[4719]: I1124 09:04:20.527327 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76442e88-72e2-4a86-99b4-bd07f0490aa9" path="/var/lib/kubelet/pods/76442e88-72e2-4a86-99b4-bd07f0490aa9/volumes"
Nov 24 09:04:21 crc kubenswrapper[4719]: I1124 09:04:21.069028 4719 generic.go:334] "Generic (PLEG): container finished" podID="c5bf98b9-f8bc-49ca-92f4-b56237133059" containerID="3a560ee876ac8d0dba177b9a2311d6cde2f10a420e8e83953f92a9039839ba9d" exitCode=0
Nov 24 09:04:21 crc kubenswrapper[4719]: I1124 09:04:21.069125 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerDied","Data":"3a560ee876ac8d0dba177b9a2311d6cde2f10a420e8e83953f92a9039839ba9d"}
Nov 24 09:04:21 crc kubenswrapper[4719]: I1124 09:04:21.069158 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"bae94a07f0453ce25dbd1224f68237c9a897961b07c9e654cb451f59235ca966"}
Nov 24 09:04:22 crc kubenswrapper[4719]: I1124 09:04:22.076733 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"1e7a37f5e8af4ef6bcc0b9ffdb35fd46f5734e6f7f26ce68c087ebe11ba93d01"}
Nov 24 09:04:22 crc kubenswrapper[4719]: I1124 09:04:22.077111 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"972f7babfe7553cfb9b84d0abd075d1a6f6a0072c68909d61ce7028444a7dffc"}
Nov 24 09:04:22 crc kubenswrapper[4719]: I1124 09:04:22.077125 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"6081ea940b97afabe723d89cf08f644e9ec86bc5fc20fb7f52b42f378453cc57"}
Nov 24 09:04:22 crc kubenswrapper[4719]: I1124 09:04:22.077138 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"6cc1222fe4fb965885aefb129cd1c1c7cfbbc6e93411fed53399d72df5d185fa"}
Nov 24 09:04:22 crc kubenswrapper[4719]: I1124 09:04:22.077149 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"648a50d5dff6fd31c41c389a43661c7f8e7fe64d49b78844a0305a7d5b29c427"}
Nov 24 09:04:22 crc kubenswrapper[4719]: I1124 09:04:22.077162 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"3d94cf04a0e7255ab239b79d4401c1964a67c6aa2204375ab55f550c2ebf2690"}
Nov 24 09:04:24 crc kubenswrapper[4719]: I1124 09:04:24.091470 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"3009a032b27469abb433129577dda26cd3db42239e6537699f7bfe8ea5060a2d"}
Nov 24 09:04:27 crc kubenswrapper[4719]: I1124 09:04:27.110441 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" event={"ID":"c5bf98b9-f8bc-49ca-92f4-b56237133059","Type":"ContainerStarted","Data":"7c63a1c87bd593392a594182f642c4375b69a5e1cc76237225626282709b499f"}
Nov 24 09:04:27 crc kubenswrapper[4719]: I1124 09:04:27.110759 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7"
Nov 24 09:04:27 crc kubenswrapper[4719]: I1124 09:04:27.110771 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7"
Nov 24 09:04:27 crc kubenswrapper[4719]: I1124 09:04:27.140107 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7" podStartSLOduration=8.140089994 podStartE2EDuration="8.140089994s" podCreationTimestamp="2025-11-24 09:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:04:27.137442767 +0000 UTC m=+643.468716039" watchObservedRunningTime="2025-11-24 09:04:27.140089994 +0000 UTC m=+643.471363246"
Nov 24 09:04:27 crc kubenswrapper[4719]: I1124 09:04:27.146206 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7"
Nov 24 09:04:28 crc kubenswrapper[4719]: I1124 09:04:28.115210 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7"
Nov 24 09:04:28 crc kubenswrapper[4719]: I1124 09:04:28.140744 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7"
Nov 24 09:04:34 crc kubenswrapper[4719]: I1124 09:04:34.522641 4719 scope.go:117] "RemoveContainer" containerID="6bfb8a0689605bb34e2409cb37e1feb999c406f1d39df1fae17d8839dd58e911"
Nov 24 09:04:35 crc kubenswrapper[4719]: I1124 09:04:35.149371 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v8ghd_1e9122c9-57ef-4b8f-92a8-593533891255/kube-multus/1.log"
Nov 24 09:04:35 crc kubenswrapper[4719]: I1124 09:04:35.150120 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v8ghd_1e9122c9-57ef-4b8f-92a8-593533891255/kube-multus/0.log"
Nov 24 09:04:35 crc kubenswrapper[4719]: I1124 09:04:35.150186 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-v8ghd" event={"ID":"1e9122c9-57ef-4b8f-92a8-593533891255","Type":"ContainerStarted","Data":"53b21ff5ee940de55c6a98e67c987e8d9a07952275bbe426f92e94f46d7d3f0e"}
Nov 24 09:04:44 crc kubenswrapper[4719]: I1124 09:04:44.805721 4719 scope.go:117] "RemoveContainer" containerID="89b86446f9cf6a02908f8d3d1d62754a78d6a31f7552f794299daf7e6fa20452"
Nov 24 09:04:45 crc kubenswrapper[4719]: I1124 09:04:45.203542 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-v8ghd_1e9122c9-57ef-4b8f-92a8-593533891255/kube-multus/1.log"
Nov 24 09:04:50 crc kubenswrapper[4719]: I1124 09:04:50.288940 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9fzs7"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.045615 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"]
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.048183 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.053213 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.064850 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"]
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.164706 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmtvm\" (UniqueName: \"kubernetes.io/projected/8267c94c-41ea-4889-bd9f-398571d09747-kube-api-access-tmtvm\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.164829 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.164865 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.265480 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmtvm\" (UniqueName: \"kubernetes.io/projected/8267c94c-41ea-4889-bd9f-398571d09747-kube-api-access-tmtvm\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.265722 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.265746 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.266166 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.266197 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.304985 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmtvm\" (UniqueName: \"kubernetes.io/projected/8267c94c-41ea-4889-bd9f-398571d09747-kube-api-access-tmtvm\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.361363 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:02 crc kubenswrapper[4719]: I1124 09:05:02.766400 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"]
Nov 24 09:05:03 crc kubenswrapper[4719]: I1124 09:05:03.296954 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9" event={"ID":"8267c94c-41ea-4889-bd9f-398571d09747","Type":"ContainerStarted","Data":"db8823a40d5578710f616c248ae06dd38c7bfe66b7bcd46627a84a277297c6ce"}
Nov 24 09:05:04 crc kubenswrapper[4719]: I1124 09:05:04.302585 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9" event={"ID":"8267c94c-41ea-4889-bd9f-398571d09747","Type":"ContainerStarted","Data":"14ac506e9aebc6cfe0ed14858cab65273a82eb0cd50102b90a8768861ed00521"}
Nov 24 09:05:06 crc kubenswrapper[4719]: I1124 09:05:06.323344 4719 generic.go:334] "Generic (PLEG): container finished" podID="8267c94c-41ea-4889-bd9f-398571d09747" containerID="14ac506e9aebc6cfe0ed14858cab65273a82eb0cd50102b90a8768861ed00521" exitCode=0
Nov 24 09:05:06 crc kubenswrapper[4719]: I1124 09:05:06.323391 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9" event={"ID":"8267c94c-41ea-4889-bd9f-398571d09747","Type":"ContainerDied","Data":"14ac506e9aebc6cfe0ed14858cab65273a82eb0cd50102b90a8768861ed00521"}
Nov 24 09:05:08 crc kubenswrapper[4719]: I1124 09:05:08.334822 4719 generic.go:334] "Generic (PLEG): container finished" podID="8267c94c-41ea-4889-bd9f-398571d09747" containerID="e4a27bfa5bfa6082bdf51ff07bba45828ecbe108a7c0ef467a6c91e9a0c6ec35" exitCode=0
Nov 24 09:05:08 crc kubenswrapper[4719]: I1124 09:05:08.334870 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9" event={"ID":"8267c94c-41ea-4889-bd9f-398571d09747","Type":"ContainerDied","Data":"e4a27bfa5bfa6082bdf51ff07bba45828ecbe108a7c0ef467a6c91e9a0c6ec35"}
Nov 24 09:05:09 crc kubenswrapper[4719]: I1124 09:05:09.342458 4719 generic.go:334] "Generic (PLEG): container finished" podID="8267c94c-41ea-4889-bd9f-398571d09747" containerID="e0a5ec6a57ff1f2d1b60c2fba193b1d0a12994468b0e5671f89fd2b5a6cf13bc" exitCode=0
Nov 24 09:05:09 crc kubenswrapper[4719]: I1124 09:05:09.342507 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9" event={"ID":"8267c94c-41ea-4889-bd9f-398571d09747","Type":"ContainerDied","Data":"e0a5ec6a57ff1f2d1b60c2fba193b1d0a12994468b0e5671f89fd2b5a6cf13bc"}
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.565075 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.714614 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-util\") pod \"8267c94c-41ea-4889-bd9f-398571d09747\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") "
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.714691 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmtvm\" (UniqueName: \"kubernetes.io/projected/8267c94c-41ea-4889-bd9f-398571d09747-kube-api-access-tmtvm\") pod \"8267c94c-41ea-4889-bd9f-398571d09747\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") "
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.714797 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-bundle\") pod \"8267c94c-41ea-4889-bd9f-398571d09747\" (UID: \"8267c94c-41ea-4889-bd9f-398571d09747\") "
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.715779 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-bundle" (OuterVolumeSpecName: "bundle") pod "8267c94c-41ea-4889-bd9f-398571d09747" (UID: "8267c94c-41ea-4889-bd9f-398571d09747"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.721258 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8267c94c-41ea-4889-bd9f-398571d09747-kube-api-access-tmtvm" (OuterVolumeSpecName: "kube-api-access-tmtvm") pod "8267c94c-41ea-4889-bd9f-398571d09747" (UID: "8267c94c-41ea-4889-bd9f-398571d09747"). InnerVolumeSpecName "kube-api-access-tmtvm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.728354 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-util" (OuterVolumeSpecName: "util") pod "8267c94c-41ea-4889-bd9f-398571d09747" (UID: "8267c94c-41ea-4889-bd9f-398571d09747"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.816792 4719 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.816825 4719 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8267c94c-41ea-4889-bd9f-398571d09747-util\") on node \"crc\" DevicePath \"\""
Nov 24 09:05:10 crc kubenswrapper[4719]: I1124 09:05:10.816835 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmtvm\" (UniqueName: \"kubernetes.io/projected/8267c94c-41ea-4889-bd9f-398571d09747-kube-api-access-tmtvm\") on node \"crc\" DevicePath \"\""
Nov 24 09:05:11 crc kubenswrapper[4719]: I1124 09:05:11.354177 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9" event={"ID":"8267c94c-41ea-4889-bd9f-398571d09747","Type":"ContainerDied","Data":"db8823a40d5578710f616c248ae06dd38c7bfe66b7bcd46627a84a277297c6ce"}
Nov 24 09:05:11 crc kubenswrapper[4719]: I1124 09:05:11.354223 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db8823a40d5578710f616c248ae06dd38c7bfe66b7bcd46627a84a277297c6ce"
Nov 24 09:05:11 crc kubenswrapper[4719]: I1124 09:05:11.354258 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9"
Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.571742 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-2w459"]
Nov 24 09:05:13 crc kubenswrapper[4719]: E1124 09:05:13.571996 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8267c94c-41ea-4889-bd9f-398571d09747" containerName="pull"
Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.572012 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8267c94c-41ea-4889-bd9f-398571d09747" containerName="pull"
Nov 24 09:05:13 crc kubenswrapper[4719]: E1124 09:05:13.572026 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8267c94c-41ea-4889-bd9f-398571d09747" containerName="util"
Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.572054 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8267c94c-41ea-4889-bd9f-398571d09747" containerName="util"
Nov 24 09:05:13 crc kubenswrapper[4719]: E1124 09:05:13.572073 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8267c94c-41ea-4889-bd9f-398571d09747" containerName="extract"
Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.572081 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8267c94c-41ea-4889-bd9f-398571d09747" containerName="extract"
Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.572203 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="8267c94c-41ea-4889-bd9f-398571d09747" containerName="extract"
Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.572708 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-2w459" Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.574861 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.574950 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.575297 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zggfk" Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.619093 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-2w459"] Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.753595 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr2f6\" (UniqueName: \"kubernetes.io/projected/875211b7-4698-4cb8-b214-1665dd3a1a77-kube-api-access-vr2f6\") pod \"nmstate-operator-557fdffb88-2w459\" (UID: \"875211b7-4698-4cb8-b214-1665dd3a1a77\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-2w459" Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.854495 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2f6\" (UniqueName: \"kubernetes.io/projected/875211b7-4698-4cb8-b214-1665dd3a1a77-kube-api-access-vr2f6\") pod \"nmstate-operator-557fdffb88-2w459\" (UID: \"875211b7-4698-4cb8-b214-1665dd3a1a77\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-2w459" Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.874222 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr2f6\" (UniqueName: \"kubernetes.io/projected/875211b7-4698-4cb8-b214-1665dd3a1a77-kube-api-access-vr2f6\") pod \"nmstate-operator-557fdffb88-2w459\" (UID: \"875211b7-4698-4cb8-b214-1665dd3a1a77\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-2w459" Nov 24 09:05:13 crc kubenswrapper[4719]: I1124 09:05:13.887027 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-2w459" Nov 24 09:05:14 crc kubenswrapper[4719]: I1124 09:05:14.074055 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-2w459"] Nov 24 09:05:14 crc kubenswrapper[4719]: I1124 09:05:14.369446 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-2w459" event={"ID":"875211b7-4698-4cb8-b214-1665dd3a1a77","Type":"ContainerStarted","Data":"c657403c3267e629cfb2bfacec0946d14b97ae896ce2c7e16cb7dc6ea6dacd34"} Nov 24 09:05:17 crc kubenswrapper[4719]: I1124 09:05:17.384122 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-2w459" event={"ID":"875211b7-4698-4cb8-b214-1665dd3a1a77","Type":"ContainerStarted","Data":"0c22a82a08d4027aa077215b711d8617ea3fa4d4fc70aee5a269a7b2e50066b5"} Nov 24 09:05:17 crc kubenswrapper[4719]: I1124 09:05:17.402951 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-2w459" podStartSLOduration=2.217388751 podStartE2EDuration="4.402935997s" podCreationTimestamp="2025-11-24 09:05:13 +0000 UTC" firstStartedPulling="2025-11-24 09:05:14.094643137 +0000 UTC m=+690.425916389" lastFinishedPulling="2025-11-24 09:05:16.280190383 +0000 UTC m=+692.611463635" observedRunningTime="2025-11-24 09:05:17.3980937 +0000 UTC m=+693.729366952" watchObservedRunningTime="2025-11-24 09:05:17.402935997 +0000 UTC m=+693.734209249" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.549825 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.551171 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.554292 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-86b2b" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.563773 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.564611 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:22 crc kubenswrapper[4719]: W1124 09:05:22.569024 4719 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: secrets "openshift-nmstate-webhook" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Nov 24 09:05:22 crc kubenswrapper[4719]: E1124 09:05:22.569510 4719 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-nmstate-webhook\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.572633 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.580927 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dd5zz"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.581772 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.652245 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.665730 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-ovs-socket\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.665781 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7km8n\" (UniqueName: \"kubernetes.io/projected/a11d83d8-730f-4b57-bc95-e0506f69539d-kube-api-access-7km8n\") pod \"nmstate-webhook-6b89b748d8-bxtbn\" (UID: \"a11d83d8-730f-4b57-bc95-e0506f69539d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.665806 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-nmstate-lock\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.665857 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-dbus-socket\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.665945 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a11d83d8-730f-4b57-bc95-e0506f69539d-tls-key-pair\") pod 
\"nmstate-webhook-6b89b748d8-bxtbn\" (UID: \"a11d83d8-730f-4b57-bc95-e0506f69539d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.665993 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2hz\" (UniqueName: \"kubernetes.io/projected/6b698c0f-63ea-4883-8771-f8b53718d191-kube-api-access-6z2hz\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.666052 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ml8\" (UniqueName: \"kubernetes.io/projected/e0130b51-d625-42b0-9f57-018da660dddd-kube-api-access-s2ml8\") pod \"nmstate-metrics-5dcf9c57c5-r5mnn\" (UID: \"e0130b51-d625-42b0-9f57-018da660dddd\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.727359 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.727986 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.732624 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.732944 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-gksf6" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.733337 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.748285 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767061 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-dbus-socket\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767120 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a11d83d8-730f-4b57-bc95-e0506f69539d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-bxtbn\" (UID: \"a11d83d8-730f-4b57-bc95-e0506f69539d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767150 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2hz\" (UniqueName: \"kubernetes.io/projected/6b698c0f-63ea-4883-8771-f8b53718d191-kube-api-access-6z2hz\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767177 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ml8\" (UniqueName: \"kubernetes.io/projected/e0130b51-d625-42b0-9f57-018da660dddd-kube-api-access-s2ml8\") pod 
\"nmstate-metrics-5dcf9c57c5-r5mnn\" (UID: \"e0130b51-d625-42b0-9f57-018da660dddd\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767220 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-ovs-socket\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767245 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-nmstate-lock\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767265 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7km8n\" (UniqueName: \"kubernetes.io/projected/a11d83d8-730f-4b57-bc95-e0506f69539d-kube-api-access-7km8n\") pod \"nmstate-webhook-6b89b748d8-bxtbn\" (UID: \"a11d83d8-730f-4b57-bc95-e0506f69539d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767387 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-dbus-socket\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767473 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-ovs-socket\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.767514 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6b698c0f-63ea-4883-8771-f8b53718d191-nmstate-lock\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.787219 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2hz\" (UniqueName: \"kubernetes.io/projected/6b698c0f-63ea-4883-8771-f8b53718d191-kube-api-access-6z2hz\") pod \"nmstate-handler-dd5zz\" (UID: \"6b698c0f-63ea-4883-8771-f8b53718d191\") " pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.791679 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ml8\" (UniqueName: \"kubernetes.io/projected/e0130b51-d625-42b0-9f57-018da660dddd-kube-api-access-s2ml8\") pod \"nmstate-metrics-5dcf9c57c5-r5mnn\" (UID: \"e0130b51-d625-42b0-9f57-018da660dddd\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.791797 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7km8n\" (UniqueName: \"kubernetes.io/projected/a11d83d8-730f-4b57-bc95-e0506f69539d-kube-api-access-7km8n\") pod \"nmstate-webhook-6b89b748d8-bxtbn\" (UID: 
\"a11d83d8-730f-4b57-bc95-e0506f69539d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.868444 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjn5h\" (UniqueName: \"kubernetes.io/projected/789cda50-c0b4-40be-88a7-9af3409bc49c-kube-api-access-rjn5h\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.868523 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/789cda50-c0b4-40be-88a7-9af3409bc49c-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.868570 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/789cda50-c0b4-40be-88a7-9af3409bc49c-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.880140 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.904168 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:22 crc kubenswrapper[4719]: W1124 09:05:22.936678 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b698c0f_63ea_4883_8771_f8b53718d191.slice/crio-d05859a5d97e0ee74492f06941079117989f3b046ca8558b1da014df26677f3e WatchSource:0}: Error finding container d05859a5d97e0ee74492f06941079117989f3b046ca8558b1da014df26677f3e: Status 404 returned error can't find the container with id d05859a5d97e0ee74492f06941079117989f3b046ca8558b1da014df26677f3e Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.941407 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55bd768597-cssz5"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.942527 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.958699 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55bd768597-cssz5"] Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.969287 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/789cda50-c0b4-40be-88a7-9af3409bc49c-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.969391 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjn5h\" (UniqueName: \"kubernetes.io/projected/789cda50-c0b4-40be-88a7-9af3409bc49c-kube-api-access-rjn5h\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.969430 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/789cda50-c0b4-40be-88a7-9af3409bc49c-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.970789 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/789cda50-c0b4-40be-88a7-9af3409bc49c-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.996070 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjn5h\" (UniqueName: \"kubernetes.io/projected/789cda50-c0b4-40be-88a7-9af3409bc49c-kube-api-access-rjn5h\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:22 crc kubenswrapper[4719]: I1124 09:05:22.996650 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/789cda50-c0b4-40be-88a7-9af3409bc49c-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4ssqk\" (UID: \"789cda50-c0b4-40be-88a7-9af3409bc49c\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.049623 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.070544 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-oauth-serving-cert\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.070614 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/561dfca1-86bf-4e1a-86fe-193061e13104-console-serving-cert\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.070631 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrng\" (UniqueName: \"kubernetes.io/projected/561dfca1-86bf-4e1a-86fe-193061e13104-kube-api-access-gtrng\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.070673 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/561dfca1-86bf-4e1a-86fe-193061e13104-console-oauth-config\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.070692 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-service-ca\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.070764 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-trusted-ca-bundle\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.070801 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-console-config\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.160266 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn"] Nov 24 09:05:23 crc kubenswrapper[4719]: W1124 09:05:23.168334 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0130b51_d625_42b0_9f57_018da660dddd.slice/crio-8e0281d3442c0c1314f6270436581ce7eabf05a2e1bc97da74a32ee145181b2d WatchSource:0}: Error finding container 
8e0281d3442c0c1314f6270436581ce7eabf05a2e1bc97da74a32ee145181b2d: Status 404 returned error can't find the container with id 8e0281d3442c0c1314f6270436581ce7eabf05a2e1bc97da74a32ee145181b2d Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.171411 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-oauth-serving-cert\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.171457 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/561dfca1-86bf-4e1a-86fe-193061e13104-console-serving-cert\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.171483 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtrng\" (UniqueName: \"kubernetes.io/projected/561dfca1-86bf-4e1a-86fe-193061e13104-kube-api-access-gtrng\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.171539 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/561dfca1-86bf-4e1a-86fe-193061e13104-console-oauth-config\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.171565 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-service-ca\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.171590 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-console-config\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.171611 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-trusted-ca-bundle\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.173391 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-service-ca\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.173412 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-oauth-serving-cert\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.174523 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-trusted-ca-bundle\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.176633 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/561dfca1-86bf-4e1a-86fe-193061e13104-console-oauth-config\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.176698 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/561dfca1-86bf-4e1a-86fe-193061e13104-console-config\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.177889 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/561dfca1-86bf-4e1a-86fe-193061e13104-console-serving-cert\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.189583 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtrng\" (UniqueName: \"kubernetes.io/projected/561dfca1-86bf-4e1a-86fe-193061e13104-kube-api-access-gtrng\") pod \"console-55bd768597-cssz5\" (UID: \"561dfca1-86bf-4e1a-86fe-193061e13104\") " pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.265613 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk"] Nov 24 09:05:23 crc kubenswrapper[4719]: W1124 09:05:23.272586 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod789cda50_c0b4_40be_88a7_9af3409bc49c.slice/crio-0a8ede2195bac7741b7dffc132ae29018d6541243eb2f2ad36167303186d2cf8 WatchSource:0}: Error finding container 0a8ede2195bac7741b7dffc132ae29018d6541243eb2f2ad36167303186d2cf8: Status 404 returned error can't find the container with id 0a8ede2195bac7741b7dffc132ae29018d6541243eb2f2ad36167303186d2cf8 Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.275787 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.413721 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" event={"ID":"e0130b51-d625-42b0-9f57-018da660dddd","Type":"ContainerStarted","Data":"8e0281d3442c0c1314f6270436581ce7eabf05a2e1bc97da74a32ee145181b2d"} Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.414667 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" event={"ID":"789cda50-c0b4-40be-88a7-9af3409bc49c","Type":"ContainerStarted","Data":"0a8ede2195bac7741b7dffc132ae29018d6541243eb2f2ad36167303186d2cf8"} Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.416941 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dd5zz" event={"ID":"6b698c0f-63ea-4883-8771-f8b53718d191","Type":"ContainerStarted","Data":"d05859a5d97e0ee74492f06941079117989f3b046ca8558b1da014df26677f3e"} Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.449924 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55bd768597-cssz5"] Nov 24 09:05:23 crc kubenswrapper[4719]: W1124 09:05:23.455232 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod561dfca1_86bf_4e1a_86fe_193061e13104.slice/crio-0926a272aa468c0e4180e66b26603034561a0f603acca56910ec4b5ab04ceb78 WatchSource:0}: Error finding container 0926a272aa468c0e4180e66b26603034561a0f603acca56910ec4b5ab04ceb78: Status 404 returned error can't find the container with id 0926a272aa468c0e4180e66b26603034561a0f603acca56910ec4b5ab04ceb78 Nov 24 09:05:23 crc kubenswrapper[4719]: E1124 09:05:23.767913 4719 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: failed to sync secret cache: timed out waiting for the condition Nov 24 09:05:23 crc kubenswrapper[4719]: E1124 09:05:23.767994 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a11d83d8-730f-4b57-bc95-e0506f69539d-tls-key-pair podName:a11d83d8-730f-4b57-bc95-e0506f69539d nodeName:}" failed. No retries permitted until 2025-11-24 09:05:24.267975754 +0000 UTC m=+700.599249006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/a11d83d8-730f-4b57-bc95-e0506f69539d-tls-key-pair") pod "nmstate-webhook-6b89b748d8-bxtbn" (UID: "a11d83d8-730f-4b57-bc95-e0506f69539d") : failed to sync secret cache: timed out waiting for the condition Nov 24 09:05:23 crc kubenswrapper[4719]: I1124 09:05:23.829846 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 09:05:24 crc kubenswrapper[4719]: I1124 09:05:24.287286 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a11d83d8-730f-4b57-bc95-e0506f69539d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-bxtbn\" (UID: \"a11d83d8-730f-4b57-bc95-e0506f69539d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:24 crc kubenswrapper[4719]: I1124 09:05:24.292628 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a11d83d8-730f-4b57-bc95-e0506f69539d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-bxtbn\" (UID: \"a11d83d8-730f-4b57-bc95-e0506f69539d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:24 crc kubenswrapper[4719]: I1124 09:05:24.388931 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:24 crc kubenswrapper[4719]: I1124 09:05:24.425025 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55bd768597-cssz5" event={"ID":"561dfca1-86bf-4e1a-86fe-193061e13104","Type":"ContainerStarted","Data":"c8583d8e92445f22badd9c4bdc4f0aac5d24611869aaafb085cd6f1535c9de76"} Nov 24 09:05:24 crc kubenswrapper[4719]: I1124 09:05:24.426174 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55bd768597-cssz5" event={"ID":"561dfca1-86bf-4e1a-86fe-193061e13104","Type":"ContainerStarted","Data":"0926a272aa468c0e4180e66b26603034561a0f603acca56910ec4b5ab04ceb78"} Nov 24 09:05:24 crc kubenswrapper[4719]: I1124 09:05:24.451537 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55bd768597-cssz5" podStartSLOduration=2.451521806 podStartE2EDuration="2.451521806s" podCreationTimestamp="2025-11-24 09:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:24.449055706 +0000 UTC m=+700.780328978" watchObservedRunningTime="2025-11-24 09:05:24.451521806 +0000 UTC m=+700.782795058" Nov 24 09:05:24 crc kubenswrapper[4719]: I1124 09:05:24.600095 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn"] Nov 24 09:05:25 crc kubenswrapper[4719]: I1124 09:05:25.434851 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" event={"ID":"a11d83d8-730f-4b57-bc95-e0506f69539d","Type":"ContainerStarted","Data":"6982bf3ffad52fec269e9205971dae478d8351bb298e5a7cf9876e8348dbcab3"} Nov 24 09:05:26 crc kubenswrapper[4719]: I1124 09:05:26.440926 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" event={"ID":"e0130b51-d625-42b0-9f57-018da660dddd","Type":"ContainerStarted","Data":"1d56b72f737d664961ef12bb284c26b658411475e79b5a877c850c1a156fce07"} Nov 24 09:05:26 crc kubenswrapper[4719]: I1124 
09:05:26.442043 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" event={"ID":"a11d83d8-730f-4b57-bc95-e0506f69539d","Type":"ContainerStarted","Data":"055e2e8cc17b1a9282e59912821d4df6405359e286b27e2bee6179ed8f8bf818"} Nov 24 09:05:26 crc kubenswrapper[4719]: I1124 09:05:26.442871 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:26 crc kubenswrapper[4719]: I1124 09:05:26.446189 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" event={"ID":"789cda50-c0b4-40be-88a7-9af3409bc49c","Type":"ContainerStarted","Data":"b2fcd3759a2bce13658e1db7edda9ea86bb26ec029e70ea05e9607b6608c2389"} Nov 24 09:05:26 crc kubenswrapper[4719]: I1124 09:05:26.448474 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:26 crc kubenswrapper[4719]: I1124 09:05:26.471822 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" podStartSLOduration=2.9227362919999997 podStartE2EDuration="4.471807305s" podCreationTimestamp="2025-11-24 09:05:22 +0000 UTC" firstStartedPulling="2025-11-24 09:05:24.610635569 +0000 UTC m=+700.941908821" lastFinishedPulling="2025-11-24 09:05:26.159706582 +0000 UTC m=+702.490979834" observedRunningTime="2025-11-24 09:05:26.461075242 +0000 UTC m=+702.792348504" watchObservedRunningTime="2025-11-24 09:05:26.471807305 +0000 UTC m=+702.803080557" Nov 24 09:05:26 crc kubenswrapper[4719]: I1124 09:05:26.490350 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-dd5zz" podStartSLOduration=1.280981242 podStartE2EDuration="4.490332348s" podCreationTimestamp="2025-11-24 09:05:22 +0000 UTC" firstStartedPulling="2025-11-24 09:05:22.95015926 +0000 UTC m=+699.281432512" lastFinishedPulling="2025-11-24 09:05:26.159510366 +0000 UTC m=+702.490783618" observedRunningTime="2025-11-24 09:05:26.484782621 +0000 UTC m=+702.816055883" watchObservedRunningTime="2025-11-24 09:05:26.490332348 +0000 UTC m=+702.821605610" Nov 24 09:05:26 crc kubenswrapper[4719]: I1124 09:05:26.509210 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4ssqk" podStartSLOduration=1.735165938 podStartE2EDuration="4.509186581s" podCreationTimestamp="2025-11-24 09:05:22 +0000 UTC" firstStartedPulling="2025-11-24 09:05:23.27489183 +0000 UTC m=+699.606165082" lastFinishedPulling="2025-11-24 09:05:26.048912473 +0000 UTC m=+702.380185725" observedRunningTime="2025-11-24 09:05:26.506478644 +0000 UTC m=+702.837751916" watchObservedRunningTime="2025-11-24 09:05:26.509186581 +0000 UTC m=+702.840459833" Nov 24 09:05:27 crc kubenswrapper[4719]: I1124 09:05:27.454474 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dd5zz" event={"ID":"6b698c0f-63ea-4883-8771-f8b53718d191","Type":"ContainerStarted","Data":"b6a3f7d68239fd624b0c633f0bb6c12db9df0471fee7fa24c86155a6ab263c5d"} Nov 24 09:05:30 crc kubenswrapper[4719]: I1124 09:05:30.470741 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" event={"ID":"e0130b51-d625-42b0-9f57-018da660dddd","Type":"ContainerStarted","Data":"1165e6d37de552790922e7c224ed6799d3261a0f5d2c732014651b95641e7a6a"} Nov 24 09:05:32 crc 
kubenswrapper[4719]: I1124 09:05:32.925748 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dd5zz" Nov 24 09:05:32 crc kubenswrapper[4719]: I1124 09:05:32.943878 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-r5mnn" podStartSLOduration=4.095865549 podStartE2EDuration="10.943864383s" podCreationTimestamp="2025-11-24 09:05:22 +0000 UTC" firstStartedPulling="2025-11-24 09:05:23.170737339 +0000 UTC m=+699.502010591" lastFinishedPulling="2025-11-24 09:05:30.018736173 +0000 UTC m=+706.350009425" observedRunningTime="2025-11-24 09:05:30.490613228 +0000 UTC m=+706.821886500" watchObservedRunningTime="2025-11-24 09:05:32.943864383 +0000 UTC m=+709.275137635" Nov 24 09:05:33 crc kubenswrapper[4719]: I1124 09:05:33.276344 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:33 crc kubenswrapper[4719]: I1124 09:05:33.276860 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:33 crc kubenswrapper[4719]: I1124 09:05:33.287194 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:33 crc kubenswrapper[4719]: I1124 09:05:33.493628 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55bd768597-cssz5" Nov 24 09:05:33 crc kubenswrapper[4719]: I1124 09:05:33.554418 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l4lt5"] Nov 24 09:05:44 crc kubenswrapper[4719]: I1124 09:05:44.394680 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-bxtbn" Nov 24 09:05:47 crc kubenswrapper[4719]: I1124 09:05:47.417342 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hcbkk"] Nov 24 09:05:47 crc kubenswrapper[4719]: I1124 09:05:47.417841 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" podUID="5ed2027e-eab9-48cc-a501-e6ff6ce80e92" containerName="controller-manager" containerID="cri-o://1f460ecf9d691459441a17492aba54fd31c186df64101d40043e3cbcd01684e8" gracePeriod=30 Nov 24 09:05:47 crc kubenswrapper[4719]: I1124 09:05:47.522426 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q"] Nov 24 09:05:47 crc kubenswrapper[4719]: I1124 09:05:47.522923 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" podUID="cdf07083-6f82-49a7-9af9-b2d7aec76240" containerName="route-controller-manager" containerID="cri-o://2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e" gracePeriod=30 Nov 24 09:05:47 crc kubenswrapper[4719]: I1124 09:05:47.579737 4719 generic.go:334] "Generic (PLEG): container finished" podID="5ed2027e-eab9-48cc-a501-e6ff6ce80e92" containerID="1f460ecf9d691459441a17492aba54fd31c186df64101d40043e3cbcd01684e8" exitCode=0 Nov 24 09:05:47 crc kubenswrapper[4719]: I1124 09:05:47.579786 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" 
event={"ID":"5ed2027e-eab9-48cc-a501-e6ff6ce80e92","Type":"ContainerDied","Data":"1f460ecf9d691459441a17492aba54fd31c186df64101d40043e3cbcd01684e8"} Nov 24 09:05:47 crc kubenswrapper[4719]: I1124 09:05:47.954796 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.132109 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-serving-cert\") pod \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.132177 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-config\") pod \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.132223 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-proxy-ca-bundles\") pod \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.132275 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-client-ca\") pod \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.132310 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d92wc\" (UniqueName: \"kubernetes.io/projected/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-kube-api-access-d92wc\") pod \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\" (UID: \"5ed2027e-eab9-48cc-a501-e6ff6ce80e92\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.133693 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5ed2027e-eab9-48cc-a501-e6ff6ce80e92" (UID: "5ed2027e-eab9-48cc-a501-e6ff6ce80e92"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.133919 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ed2027e-eab9-48cc-a501-e6ff6ce80e92" (UID: "5ed2027e-eab9-48cc-a501-e6ff6ce80e92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.133994 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-config" (OuterVolumeSpecName: "config") pod "5ed2027e-eab9-48cc-a501-e6ff6ce80e92" (UID: "5ed2027e-eab9-48cc-a501-e6ff6ce80e92"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.139787 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-kube-api-access-d92wc" (OuterVolumeSpecName: "kube-api-access-d92wc") pod "5ed2027e-eab9-48cc-a501-e6ff6ce80e92" (UID: "5ed2027e-eab9-48cc-a501-e6ff6ce80e92"). InnerVolumeSpecName "kube-api-access-d92wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.139993 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ed2027e-eab9-48cc-a501-e6ff6ce80e92" (UID: "5ed2027e-eab9-48cc-a501-e6ff6ce80e92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.234382 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.234461 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.234487 4719 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.234514 4719 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.234538 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d92wc\" (UniqueName: \"kubernetes.io/projected/5ed2027e-eab9-48cc-a501-e6ff6ce80e92-kube-api-access-d92wc\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.380706 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.537307 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-client-ca\") pod \"cdf07083-6f82-49a7-9af9-b2d7aec76240\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.537366 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-config\") pod \"cdf07083-6f82-49a7-9af9-b2d7aec76240\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.537412 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5f6z\" (UniqueName: \"kubernetes.io/projected/cdf07083-6f82-49a7-9af9-b2d7aec76240-kube-api-access-m5f6z\") pod \"cdf07083-6f82-49a7-9af9-b2d7aec76240\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.537511 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdf07083-6f82-49a7-9af9-b2d7aec76240-serving-cert\") pod \"cdf07083-6f82-49a7-9af9-b2d7aec76240\" (UID: \"cdf07083-6f82-49a7-9af9-b2d7aec76240\") " Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.539217 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-config" (OuterVolumeSpecName: "config") pod "cdf07083-6f82-49a7-9af9-b2d7aec76240" (UID: "cdf07083-6f82-49a7-9af9-b2d7aec76240"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.539688 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-client-ca" (OuterVolumeSpecName: "client-ca") pod "cdf07083-6f82-49a7-9af9-b2d7aec76240" (UID: "cdf07083-6f82-49a7-9af9-b2d7aec76240"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.542072 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf07083-6f82-49a7-9af9-b2d7aec76240-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cdf07083-6f82-49a7-9af9-b2d7aec76240" (UID: "cdf07083-6f82-49a7-9af9-b2d7aec76240"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.543614 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf07083-6f82-49a7-9af9-b2d7aec76240-kube-api-access-m5f6z" (OuterVolumeSpecName: "kube-api-access-m5f6z") pod "cdf07083-6f82-49a7-9af9-b2d7aec76240" (UID: "cdf07083-6f82-49a7-9af9-b2d7aec76240"). InnerVolumeSpecName "kube-api-access-m5f6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.585941 4719 generic.go:334] "Generic (PLEG): container finished" podID="cdf07083-6f82-49a7-9af9-b2d7aec76240" containerID="2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e" exitCode=0 Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.585985 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.586016 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" event={"ID":"cdf07083-6f82-49a7-9af9-b2d7aec76240","Type":"ContainerDied","Data":"2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e"} Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.586061 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q" event={"ID":"cdf07083-6f82-49a7-9af9-b2d7aec76240","Type":"ContainerDied","Data":"d6166b6f4774f422161aa803a066535d1aa583a1e480f05d5d6f76cc09099e1a"} Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.586082 4719 scope.go:117] "RemoveContainer" containerID="2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.590344 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" event={"ID":"5ed2027e-eab9-48cc-a501-e6ff6ce80e92","Type":"ContainerDied","Data":"3e59dc7e3e8d9a0e81f83f454f91c07a3a4091a4e115cb98c0341c1aef16ea5b"} Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.590379 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hcbkk" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.609659 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hcbkk"] Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.610399 4719 scope.go:117] "RemoveContainer" containerID="2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e" Nov 24 09:05:48 crc kubenswrapper[4719]: E1124 09:05:48.610724 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e\": container with ID starting with 2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e not found: ID does not exist" containerID="2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.610745 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e"} err="failed to get container status \"2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e\": rpc error: code = NotFound desc = could not find container \"2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e\": container with ID starting with 2056c1f961d1d33c723858ff8f3a4a2e291dc7530c6f8b2a2d8713c3e80e963e not found: ID does not exist" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.610762 4719 scope.go:117] "RemoveContainer" containerID="1f460ecf9d691459441a17492aba54fd31c186df64101d40043e3cbcd01684e8" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.613019 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hcbkk"] Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.624549 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q"] Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.628301 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d5d8q"] Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.639254 4719 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.639280 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf07083-6f82-49a7-9af9-b2d7aec76240-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.639290 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5f6z\" (UniqueName: \"kubernetes.io/projected/cdf07083-6f82-49a7-9af9-b2d7aec76240-kube-api-access-m5f6z\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:48 crc kubenswrapper[4719]: I1124 09:05:48.639299 4719 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cdf07083-6f82-49a7-9af9-b2d7aec76240-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.034511 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5"] Nov 24 
09:05:49 crc kubenswrapper[4719]: E1124 09:05:49.034739 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf07083-6f82-49a7-9af9-b2d7aec76240" containerName="route-controller-manager" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.034752 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf07083-6f82-49a7-9af9-b2d7aec76240" containerName="route-controller-manager" Nov 24 09:05:49 crc kubenswrapper[4719]: E1124 09:05:49.034767 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed2027e-eab9-48cc-a501-e6ff6ce80e92" containerName="controller-manager" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.034773 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed2027e-eab9-48cc-a501-e6ff6ce80e92" containerName="controller-manager" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.034877 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed2027e-eab9-48cc-a501-e6ff6ce80e92" containerName="controller-manager" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.034897 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf07083-6f82-49a7-9af9-b2d7aec76240" containerName="route-controller-manager" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.035311 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.037688 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.038005 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.038266 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.038444 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.038597 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.038735 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.077452 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5"] Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.143869 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c58f79a-ecfc-4785-ac4e-aad034718d64-serving-cert\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.143963 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c58f79a-ecfc-4785-ac4e-aad034718d64-client-ca\") pod 
\"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.143981 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvwns\" (UniqueName: \"kubernetes.io/projected/5c58f79a-ecfc-4785-ac4e-aad034718d64-kube-api-access-wvwns\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.144018 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c58f79a-ecfc-4785-ac4e-aad034718d64-config\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.245321 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c58f79a-ecfc-4785-ac4e-aad034718d64-serving-cert\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.245371 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvwns\" (UniqueName: \"kubernetes.io/projected/5c58f79a-ecfc-4785-ac4e-aad034718d64-kube-api-access-wvwns\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.245390 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c58f79a-ecfc-4785-ac4e-aad034718d64-client-ca\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.245422 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c58f79a-ecfc-4785-ac4e-aad034718d64-config\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.246523 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c58f79a-ecfc-4785-ac4e-aad034718d64-config\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.246532 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c58f79a-ecfc-4785-ac4e-aad034718d64-client-ca\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: 
\"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.259170 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c58f79a-ecfc-4785-ac4e-aad034718d64-serving-cert\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.261510 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvwns\" (UniqueName: \"kubernetes.io/projected/5c58f79a-ecfc-4785-ac4e-aad034718d64-kube-api-access-wvwns\") pod \"route-controller-manager-64fbcb9d69-xl7d5\" (UID: \"5c58f79a-ecfc-4785-ac4e-aad034718d64\") " pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.353116 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.376334 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bdbf79665-fgh79"] Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.377349 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.385544 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.385776 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.388491 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.388984 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.389783 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.389852 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.390106 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.403518 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdbf79665-fgh79"] Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.551179 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-config\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc 
kubenswrapper[4719]: I1124 09:05:49.551286 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2k2x\" (UniqueName: \"kubernetes.io/projected/8c38f171-a400-44d5-ae51-fbb4fec3a45e-kube-api-access-t2k2x\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.551307 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-client-ca\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.551324 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-proxy-ca-bundles\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.551351 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c38f171-a400-44d5-ae51-fbb4fec3a45e-serving-cert\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.558766 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5"] Nov 24 09:05:49 crc kubenswrapper[4719]: W1124 09:05:49.564365 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c58f79a_ecfc_4785_ac4e_aad034718d64.slice/crio-31a257f58d586bc1fc690940365e667da001f952cb3b2e69907df59b1729413f WatchSource:0}: Error finding container 31a257f58d586bc1fc690940365e667da001f952cb3b2e69907df59b1729413f: Status 404 returned error can't find the container with id 31a257f58d586bc1fc690940365e667da001f952cb3b2e69907df59b1729413f Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.599205 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" event={"ID":"5c58f79a-ecfc-4785-ac4e-aad034718d64","Type":"ContainerStarted","Data":"31a257f58d586bc1fc690940365e667da001f952cb3b2e69907df59b1729413f"} Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.652520 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c38f171-a400-44d5-ae51-fbb4fec3a45e-serving-cert\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.652564 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-config\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: 
\"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.652627 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-client-ca\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.652643 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2k2x\" (UniqueName: \"kubernetes.io/projected/8c38f171-a400-44d5-ae51-fbb4fec3a45e-kube-api-access-t2k2x\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.652658 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-proxy-ca-bundles\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.653929 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-proxy-ca-bundles\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.654168 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-client-ca\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.654536 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c38f171-a400-44d5-ae51-fbb4fec3a45e-config\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.658395 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c38f171-a400-44d5-ae51-fbb4fec3a45e-serving-cert\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.670116 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2k2x\" (UniqueName: \"kubernetes.io/projected/8c38f171-a400-44d5-ae51-fbb4fec3a45e-kube-api-access-t2k2x\") pod \"controller-manager-7bdbf79665-fgh79\" (UID: \"8c38f171-a400-44d5-ae51-fbb4fec3a45e\") " pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:49 crc kubenswrapper[4719]: I1124 09:05:49.710635 4719 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.125273 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdbf79665-fgh79"] Nov 24 09:05:50 crc kubenswrapper[4719]: W1124 09:05:50.134971 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c38f171_a400_44d5_ae51_fbb4fec3a45e.slice/crio-a531001598af61b28b66660a0e7c9a4113627b978f7e97ebe0cded8746ed9229 WatchSource:0}: Error finding container a531001598af61b28b66660a0e7c9a4113627b978f7e97ebe0cded8746ed9229: Status 404 returned error can't find the container with id a531001598af61b28b66660a0e7c9a4113627b978f7e97ebe0cded8746ed9229 Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.556063 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ed2027e-eab9-48cc-a501-e6ff6ce80e92" path="/var/lib/kubelet/pods/5ed2027e-eab9-48cc-a501-e6ff6ce80e92/volumes" Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.556909 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdf07083-6f82-49a7-9af9-b2d7aec76240" path="/var/lib/kubelet/pods/cdf07083-6f82-49a7-9af9-b2d7aec76240/volumes" Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.607444 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" event={"ID":"5c58f79a-ecfc-4785-ac4e-aad034718d64","Type":"ContainerStarted","Data":"22f8853012fb80f3ca9855d201a167beaef66f770bd64d9aaefdf54a571f971e"} Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.608066 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.608910 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" event={"ID":"8c38f171-a400-44d5-ae51-fbb4fec3a45e","Type":"ContainerStarted","Data":"129545531efb2918d2c4a3e9d7cd57746cd6e132459f9a681f25926060bd9446"} Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.608947 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" event={"ID":"8c38f171-a400-44d5-ae51-fbb4fec3a45e","Type":"ContainerStarted","Data":"a531001598af61b28b66660a0e7c9a4113627b978f7e97ebe0cded8746ed9229"} Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.609577 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.615313 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.635805 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" podStartSLOduration=1.63578544 podStartE2EDuration="1.63578544s" podCreationTimestamp="2025-11-24 09:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:50.633962967 +0000 UTC m=+726.965236239" 
watchObservedRunningTime="2025-11-24 09:05:50.63578544 +0000 UTC m=+726.967058702" Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.654490 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" Nov 24 09:05:50 crc kubenswrapper[4719]: I1124 09:05:50.659054 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bdbf79665-fgh79" podStartSLOduration=3.659027237 podStartE2EDuration="3.659027237s" podCreationTimestamp="2025-11-24 09:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:05:50.658989376 +0000 UTC m=+726.990262638" watchObservedRunningTime="2025-11-24 09:05:50.659027237 +0000 UTC m=+726.990300489" Nov 24 09:05:58 crc kubenswrapper[4719]: I1124 09:05:58.793923 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-l4lt5" podUID="0437d205-eb04-4136-a158-01d8729c335c" containerName="console" containerID="cri-o://a62b38dfa0bf45409b3e765d1b60c7c290d1a3dcb239f90ff71126c9c92dcb74" gracePeriod=15 Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.194143 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv"] Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.195773 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.197763 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.204230 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv"] Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.309609 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.309689 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.309735 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx66x\" (UniqueName: \"kubernetes.io/projected/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-kube-api-access-fx66x\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc 
kubenswrapper[4719]: I1124 09:05:59.410421 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.410718 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx66x\" (UniqueName: \"kubernetes.io/projected/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-kube-api-access-fx66x\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.410876 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.410939 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.411252 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.437414 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx66x\" (UniqueName: \"kubernetes.io/projected/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-kube-api-access-fx66x\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.511290 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.805947 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l4lt5_0437d205-eb04-4136-a158-01d8729c335c/console/0.log" Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.806185 4719 generic.go:334] "Generic (PLEG): container finished" podID="0437d205-eb04-4136-a158-01d8729c335c" containerID="a62b38dfa0bf45409b3e765d1b60c7c290d1a3dcb239f90ff71126c9c92dcb74" exitCode=2 Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.806217 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l4lt5" event={"ID":"0437d205-eb04-4136-a158-01d8729c335c","Type":"ContainerDied","Data":"a62b38dfa0bf45409b3e765d1b60c7c290d1a3dcb239f90ff71126c9c92dcb74"} Nov 24 09:05:59 crc kubenswrapper[4719]: I1124 09:05:59.929098 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv"] Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.056977 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-l4lt5_0437d205-eb04-4136-a158-01d8729c335c/console/0.log" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.057062 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.222273 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-oauth-serving-cert\") pod \"0437d205-eb04-4136-a158-01d8729c335c\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.222751 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-trusted-ca-bundle\") pod \"0437d205-eb04-4136-a158-01d8729c335c\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.222800 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-console-config\") pod \"0437d205-eb04-4136-a158-01d8729c335c\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.222836 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-oauth-config\") pod \"0437d205-eb04-4136-a158-01d8729c335c\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.222879 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-serving-cert\") pod \"0437d205-eb04-4136-a158-01d8729c335c\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.222918 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6q5z\" (UniqueName: 
\"kubernetes.io/projected/0437d205-eb04-4136-a158-01d8729c335c-kube-api-access-c6q5z\") pod \"0437d205-eb04-4136-a158-01d8729c335c\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.222949 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-service-ca\") pod \"0437d205-eb04-4136-a158-01d8729c335c\" (UID: \"0437d205-eb04-4136-a158-01d8729c335c\") " Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.224132 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-console-config" (OuterVolumeSpecName: "console-config") pod "0437d205-eb04-4136-a158-01d8729c335c" (UID: "0437d205-eb04-4136-a158-01d8729c335c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.224182 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0437d205-eb04-4136-a158-01d8729c335c" (UID: "0437d205-eb04-4136-a158-01d8729c335c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.224417 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-service-ca" (OuterVolumeSpecName: "service-ca") pod "0437d205-eb04-4136-a158-01d8729c335c" (UID: "0437d205-eb04-4136-a158-01d8729c335c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.224426 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0437d205-eb04-4136-a158-01d8729c335c" (UID: "0437d205-eb04-4136-a158-01d8729c335c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.228521 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0437d205-eb04-4136-a158-01d8729c335c" (UID: "0437d205-eb04-4136-a158-01d8729c335c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.228797 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0437d205-eb04-4136-a158-01d8729c335c-kube-api-access-c6q5z" (OuterVolumeSpecName: "kube-api-access-c6q5z") pod "0437d205-eb04-4136-a158-01d8729c335c" (UID: "0437d205-eb04-4136-a158-01d8729c335c"). InnerVolumeSpecName "kube-api-access-c6q5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.231281 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0437d205-eb04-4136-a158-01d8729c335c" (UID: "0437d205-eb04-4136-a158-01d8729c335c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.324770 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6q5z\" (UniqueName: \"kubernetes.io/projected/0437d205-eb04-4136-a158-01d8729c335c-kube-api-access-c6q5z\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.324802 4719 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.324812 4719 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.324820 4719 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.324828 4719 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0437d205-eb04-4136-a158-01d8729c335c-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.324835 4719 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.324843 4719 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0437d205-eb04-4136-a158-01d8729c335c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.813349 4719 generic.go:334] "Generic (PLEG): container finished" podID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerID="2d674c551b99a2266b6b4367a0dda7efffaaf0da138044923fd7a309a86c8a72" exitCode=0 Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.813408 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" event={"ID":"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3","Type":"ContainerDied","Data":"2d674c551b99a2266b6b4367a0dda7efffaaf0da138044923fd7a309a86c8a72"} Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.813434 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" event={"ID":"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3","Type":"ContainerStarted","Data":"5c3941db0821be25036280b77bd523a6276ecd6f4e3bfecd33da028ae799a420"} Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.814995 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-l4lt5_0437d205-eb04-4136-a158-01d8729c335c/console/0.log" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.815048 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-l4lt5" event={"ID":"0437d205-eb04-4136-a158-01d8729c335c","Type":"ContainerDied","Data":"93ddeda68f6e0ebcf7acb04a42be3bef2deab6d8683cd2b740346e11ccb18960"} Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.815076 4719 scope.go:117] "RemoveContainer" containerID="a62b38dfa0bf45409b3e765d1b60c7c290d1a3dcb239f90ff71126c9c92dcb74" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.815183 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-l4lt5" Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.852710 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-l4lt5"] Nov 24 09:06:00 crc kubenswrapper[4719]: I1124 09:06:00.855721 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-l4lt5"] Nov 24 09:06:01 crc kubenswrapper[4719]: I1124 09:06:01.537543 4719 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.529598 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0437d205-eb04-4136-a158-01d8729c335c" path="/var/lib/kubelet/pods/0437d205-eb04-4136-a158-01d8729c335c/volumes" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.555587 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvn8c"] Nov 24 09:06:02 crc kubenswrapper[4719]: E1124 09:06:02.555802 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0437d205-eb04-4136-a158-01d8729c335c" containerName="console" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.555812 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0437d205-eb04-4136-a158-01d8729c335c" containerName="console" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.555909 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="0437d205-eb04-4136-a158-01d8729c335c" containerName="console" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.556649 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.560501 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvn8c"] Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.755058 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-utilities\") pod \"redhat-operators-qvn8c\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.755141 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-catalog-content\") pod \"redhat-operators-qvn8c\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.755268 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5bq\" (UniqueName: \"kubernetes.io/projected/8f829430-db77-41fb-b857-3b892a07bdb6-kube-api-access-lz5bq\") pod \"redhat-operators-qvn8c\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.828747 4719 generic.go:334] "Generic (PLEG): container finished" podID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerID="605d51f9da2a2765795e9c1c8a866bbee7d3aefc48b6bfc47f2375b1baf06cba" exitCode=0 Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.828787 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" event={"ID":"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3","Type":"ContainerDied","Data":"605d51f9da2a2765795e9c1c8a866bbee7d3aefc48b6bfc47f2375b1baf06cba"} Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.856437 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz5bq\" (UniqueName: \"kubernetes.io/projected/8f829430-db77-41fb-b857-3b892a07bdb6-kube-api-access-lz5bq\") pod \"redhat-operators-qvn8c\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.856563 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-utilities\") pod \"redhat-operators-qvn8c\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.856592 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-catalog-content\") pod \"redhat-operators-qvn8c\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.857112 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-catalog-content\") pod \"redhat-operators-qvn8c\" (UID: 
\"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.857182 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-utilities\") pod \"redhat-operators-qvn8c\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:02 crc kubenswrapper[4719]: I1124 09:06:02.878654 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz5bq\" (UniqueName: \"kubernetes.io/projected/8f829430-db77-41fb-b857-3b892a07bdb6-kube-api-access-lz5bq\") pod \"redhat-operators-qvn8c\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:03 crc kubenswrapper[4719]: I1124 09:06:03.169023 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:03 crc kubenswrapper[4719]: I1124 09:06:03.607889 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvn8c"] Nov 24 09:06:03 crc kubenswrapper[4719]: I1124 09:06:03.834888 4719 generic.go:334] "Generic (PLEG): container finished" podID="8f829430-db77-41fb-b857-3b892a07bdb6" containerID="2a8ede4bb3ca3f473a9c0c2c6a08ab1d9e019e47d70f14e5f65227e69c7c4310" exitCode=0 Nov 24 09:06:03 crc kubenswrapper[4719]: I1124 09:06:03.835745 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvn8c" event={"ID":"8f829430-db77-41fb-b857-3b892a07bdb6","Type":"ContainerDied","Data":"2a8ede4bb3ca3f473a9c0c2c6a08ab1d9e019e47d70f14e5f65227e69c7c4310"} Nov 24 09:06:03 crc kubenswrapper[4719]: I1124 09:06:03.835775 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvn8c" event={"ID":"8f829430-db77-41fb-b857-3b892a07bdb6","Type":"ContainerStarted","Data":"a999b199a3825f2b4a655c98628471a1c09a8dab53d098442e0ed979d500d308"} Nov 24 09:06:03 crc kubenswrapper[4719]: I1124 09:06:03.838597 4719 generic.go:334] "Generic (PLEG): container finished" podID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerID="8be8d156a8231ed1754696f205a4489d20e9eb80290f787918f2ed245d09001f" exitCode=0 Nov 24 09:06:03 crc kubenswrapper[4719]: I1124 09:06:03.838627 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" event={"ID":"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3","Type":"ContainerDied","Data":"8be8d156a8231ed1754696f205a4489d20e9eb80290f787918f2ed245d09001f"} Nov 24 09:06:04 crc kubenswrapper[4719]: I1124 09:06:04.562263 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:06:04 crc kubenswrapper[4719]: I1124 09:06:04.562606 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:06:04 crc kubenswrapper[4719]: I1124 09:06:04.844987 4719 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvn8c" event={"ID":"8f829430-db77-41fb-b857-3b892a07bdb6","Type":"ContainerStarted","Data":"980391b1c0416f34c439baa822ad0cc3b9796a3c8d74f22d9199ada41ee287e1"} Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.198744 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.287237 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-util\") pod \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.287318 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-bundle\") pod \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.287357 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx66x\" (UniqueName: \"kubernetes.io/projected/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-kube-api-access-fx66x\") pod \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\" (UID: \"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3\") " Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.288478 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-bundle" (OuterVolumeSpecName: "bundle") pod "441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" (UID: "441dcc7a-e87d-4f62-a1e8-79ec5e961ce3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.297340 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-kube-api-access-fx66x" (OuterVolumeSpecName: "kube-api-access-fx66x") pod "441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" (UID: "441dcc7a-e87d-4f62-a1e8-79ec5e961ce3"). InnerVolumeSpecName "kube-api-access-fx66x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.302319 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-util" (OuterVolumeSpecName: "util") pod "441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" (UID: "441dcc7a-e87d-4f62-a1e8-79ec5e961ce3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.388465 4719 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-util\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.388547 4719 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.388564 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx66x\" (UniqueName: \"kubernetes.io/projected/441dcc7a-e87d-4f62-a1e8-79ec5e961ce3-kube-api-access-fx66x\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.853782 4719 generic.go:334] "Generic (PLEG): container finished" podID="8f829430-db77-41fb-b857-3b892a07bdb6" containerID="980391b1c0416f34c439baa822ad0cc3b9796a3c8d74f22d9199ada41ee287e1" exitCode=0 Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.853849 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvn8c" event={"ID":"8f829430-db77-41fb-b857-3b892a07bdb6","Type":"ContainerDied","Data":"980391b1c0416f34c439baa822ad0cc3b9796a3c8d74f22d9199ada41ee287e1"} Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.857736 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" event={"ID":"441dcc7a-e87d-4f62-a1e8-79ec5e961ce3","Type":"ContainerDied","Data":"5c3941db0821be25036280b77bd523a6276ecd6f4e3bfecd33da028ae799a420"} Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.857775 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3941db0821be25036280b77bd523a6276ecd6f4e3bfecd33da028ae799a420" Nov 24 09:06:05 crc kubenswrapper[4719]: I1124 09:06:05.857852 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv" Nov 24 09:06:06 crc kubenswrapper[4719]: I1124 09:06:06.867570 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvn8c" event={"ID":"8f829430-db77-41fb-b857-3b892a07bdb6","Type":"ContainerStarted","Data":"5b0dc8688272416780a0122040f452950fc0831fcaa4b1dfee71468d8e8bd9d3"} Nov 24 09:06:06 crc kubenswrapper[4719]: I1124 09:06:06.884202 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvn8c" podStartSLOduration=2.473131179 podStartE2EDuration="4.884183489s" podCreationTimestamp="2025-11-24 09:06:02 +0000 UTC" firstStartedPulling="2025-11-24 09:06:03.836663133 +0000 UTC m=+740.167936385" lastFinishedPulling="2025-11-24 09:06:06.247715453 +0000 UTC m=+742.578988695" observedRunningTime="2025-11-24 09:06:06.883367075 +0000 UTC m=+743.214640327" watchObservedRunningTime="2025-11-24 09:06:06.884183489 +0000 UTC m=+743.215456741" Nov 24 09:06:13 crc kubenswrapper[4719]: I1124 09:06:13.169227 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:13 crc kubenswrapper[4719]: I1124 09:06:13.169533 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:13 crc kubenswrapper[4719]: I1124 09:06:13.213923 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:13 crc kubenswrapper[4719]: I1124 09:06:13.953541 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.282583 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps"] Nov 24 09:06:16 crc kubenswrapper[4719]: E1124 09:06:16.283708 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerName="extract" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.283776 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerName="extract" Nov 24 09:06:16 crc kubenswrapper[4719]: E1124 09:06:16.283842 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerName="pull" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.283899 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerName="pull" Nov 24 09:06:16 crc kubenswrapper[4719]: E1124 09:06:16.283949 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerName="util" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.284002 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerName="util" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.284151 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="441dcc7a-e87d-4f62-a1e8-79ec5e961ce3" containerName="extract" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.284626 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.286550 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.286562 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-thhjx" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.287193 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.287676 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.291725 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.306452 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps"] Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.427636 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fthk7\" (UniqueName: \"kubernetes.io/projected/053b9219-602e-4d52-af3d-a6e039be213e-kube-api-access-fthk7\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.427787 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/053b9219-602e-4d52-af3d-a6e039be213e-apiservice-cert\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.427887 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/053b9219-602e-4d52-af3d-a6e039be213e-webhook-cert\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.529546 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/053b9219-602e-4d52-af3d-a6e039be213e-webhook-cert\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.529630 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fthk7\" (UniqueName: \"kubernetes.io/projected/053b9219-602e-4d52-af3d-a6e039be213e-kube-api-access-fthk7\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.529687 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/053b9219-602e-4d52-af3d-a6e039be213e-apiservice-cert\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.537072 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/053b9219-602e-4d52-af3d-a6e039be213e-apiservice-cert\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.537461 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/053b9219-602e-4d52-af3d-a6e039be213e-webhook-cert\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.552895 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fthk7\" (UniqueName: \"kubernetes.io/projected/053b9219-602e-4d52-af3d-a6e039be213e-kube-api-access-fthk7\") pod \"metallb-operator-controller-manager-c6ccddcb9-hhfps\" (UID: \"053b9219-602e-4d52-af3d-a6e039be213e\") " pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.557560 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvn8c"] Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.557835 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvn8c" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" containerName="registry-server" containerID="cri-o://5b0dc8688272416780a0122040f452950fc0831fcaa4b1dfee71468d8e8bd9d3" gracePeriod=2 Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.561259 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-596c48c889-kksvs"] Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.561989 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.571683 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.572119 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.572304 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-m4zjj" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.600628 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.649806 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-596c48c889-kksvs"] Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.737450 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr525\" (UniqueName: \"kubernetes.io/projected/fc753907-15ea-4768-8c53-e78830249c42-kube-api-access-hr525\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.737960 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc753907-15ea-4768-8c53-e78830249c42-apiservice-cert\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.738018 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc753907-15ea-4768-8c53-e78830249c42-webhook-cert\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.839303 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc753907-15ea-4768-8c53-e78830249c42-apiservice-cert\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.840234 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc753907-15ea-4768-8c53-e78830249c42-webhook-cert\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.840286 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr525\" (UniqueName: \"kubernetes.io/projected/fc753907-15ea-4768-8c53-e78830249c42-kube-api-access-hr525\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.844887 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc753907-15ea-4768-8c53-e78830249c42-apiservice-cert\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.845393 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/fc753907-15ea-4768-8c53-e78830249c42-webhook-cert\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.874256 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr525\" (UniqueName: \"kubernetes.io/projected/fc753907-15ea-4768-8c53-e78830249c42-kube-api-access-hr525\") pod \"metallb-operator-webhook-server-596c48c889-kksvs\" (UID: \"fc753907-15ea-4768-8c53-e78830249c42\") " pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.899964 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.934679 4719 generic.go:334] "Generic (PLEG): container finished" podID="8f829430-db77-41fb-b857-3b892a07bdb6" containerID="5b0dc8688272416780a0122040f452950fc0831fcaa4b1dfee71468d8e8bd9d3" exitCode=0 Nov 24 09:06:16 crc kubenswrapper[4719]: I1124 09:06:16.934729 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvn8c" event={"ID":"8f829430-db77-41fb-b857-3b892a07bdb6","Type":"ContainerDied","Data":"5b0dc8688272416780a0122040f452950fc0831fcaa4b1dfee71468d8e8bd9d3"} Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.137338 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps"] Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.448075 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-596c48c889-kksvs"] Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.619457 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.759642 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz5bq\" (UniqueName: \"kubernetes.io/projected/8f829430-db77-41fb-b857-3b892a07bdb6-kube-api-access-lz5bq\") pod \"8f829430-db77-41fb-b857-3b892a07bdb6\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.759717 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-catalog-content\") pod \"8f829430-db77-41fb-b857-3b892a07bdb6\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.759757 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-utilities\") pod \"8f829430-db77-41fb-b857-3b892a07bdb6\" (UID: \"8f829430-db77-41fb-b857-3b892a07bdb6\") " Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.760873 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-utilities" (OuterVolumeSpecName: "utilities") pod "8f829430-db77-41fb-b857-3b892a07bdb6" (UID: "8f829430-db77-41fb-b857-3b892a07bdb6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.774704 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f829430-db77-41fb-b857-3b892a07bdb6-kube-api-access-lz5bq" (OuterVolumeSpecName: "kube-api-access-lz5bq") pod "8f829430-db77-41fb-b857-3b892a07bdb6" (UID: "8f829430-db77-41fb-b857-3b892a07bdb6"). InnerVolumeSpecName "kube-api-access-lz5bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.858137 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f829430-db77-41fb-b857-3b892a07bdb6" (UID: "8f829430-db77-41fb-b857-3b892a07bdb6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.861689 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz5bq\" (UniqueName: \"kubernetes.io/projected/8f829430-db77-41fb-b857-3b892a07bdb6-kube-api-access-lz5bq\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.861740 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.861753 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f829430-db77-41fb-b857-3b892a07bdb6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.941108 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" event={"ID":"fc753907-15ea-4768-8c53-e78830249c42","Type":"ContainerStarted","Data":"bc038ec547779504443841a592df6aab79d06bcb03cd7248dfd7bb3d3923dade"} Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.946047 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvn8c" event={"ID":"8f829430-db77-41fb-b857-3b892a07bdb6","Type":"ContainerDied","Data":"a999b199a3825f2b4a655c98628471a1c09a8dab53d098442e0ed979d500d308"} Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.946130 4719 scope.go:117] "RemoveContainer" containerID="5b0dc8688272416780a0122040f452950fc0831fcaa4b1dfee71468d8e8bd9d3" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.946284 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvn8c" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.958717 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" event={"ID":"053b9219-602e-4d52-af3d-a6e039be213e","Type":"ContainerStarted","Data":"5dbda8739f985b8d10d73a64764ffe103fbc32cbe495229eaa09bb593bcf62f9"} Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.977298 4719 scope.go:117] "RemoveContainer" containerID="980391b1c0416f34c439baa822ad0cc3b9796a3c8d74f22d9199ada41ee287e1" Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.983944 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvn8c"] Nov 24 09:06:17 crc kubenswrapper[4719]: I1124 09:06:17.987798 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvn8c"] Nov 24 09:06:18 crc kubenswrapper[4719]: I1124 09:06:18.004944 4719 scope.go:117] "RemoveContainer" containerID="2a8ede4bb3ca3f473a9c0c2c6a08ab1d9e019e47d70f14e5f65227e69c7c4310" Nov 24 09:06:18 crc kubenswrapper[4719]: I1124 09:06:18.529208 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" path="/var/lib/kubelet/pods/8f829430-db77-41fb-b857-3b892a07bdb6/volumes" Nov 24 09:06:22 crc kubenswrapper[4719]: I1124 09:06:22.023439 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" event={"ID":"053b9219-602e-4d52-af3d-a6e039be213e","Type":"ContainerStarted","Data":"15094d69e0f7c321ddbb13f9440ca47b9701cdfdb142a551eb026fde5e251c9a"} Nov 24 09:06:22 crc kubenswrapper[4719]: I1124 09:06:22.023898 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:22 crc kubenswrapper[4719]: I1124 09:06:22.054581 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" podStartSLOduration=1.8509480759999999 podStartE2EDuration="6.054565121s" podCreationTimestamp="2025-11-24 09:06:16 +0000 UTC" firstStartedPulling="2025-11-24 09:06:17.149281276 +0000 UTC m=+753.480554528" lastFinishedPulling="2025-11-24 09:06:21.352898321 +0000 UTC m=+757.684171573" observedRunningTime="2025-11-24 09:06:22.053794019 +0000 UTC m=+758.385067291" watchObservedRunningTime="2025-11-24 09:06:22.054565121 +0000 UTC m=+758.385838373" Nov 24 09:06:24 crc kubenswrapper[4719]: I1124 09:06:24.051710 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" event={"ID":"fc753907-15ea-4768-8c53-e78830249c42","Type":"ContainerStarted","Data":"b75b7baeb3f598eb80744b7ec87801cfadc2ea6d7ee1a7eec68903943c00fe02"} Nov 24 09:06:24 crc kubenswrapper[4719]: I1124 09:06:24.052096 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:34 crc kubenswrapper[4719]: I1124 09:06:34.561670 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:06:34 crc kubenswrapper[4719]: I1124 09:06:34.562242 4719 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:06:36 crc kubenswrapper[4719]: I1124 09:06:36.905962 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" Nov 24 09:06:36 crc kubenswrapper[4719]: I1124 09:06:36.935630 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-596c48c889-kksvs" podStartSLOduration=15.123290691 podStartE2EDuration="20.935606425s" podCreationTimestamp="2025-11-24 09:06:16 +0000 UTC" firstStartedPulling="2025-11-24 09:06:17.470926902 +0000 UTC m=+753.802200154" lastFinishedPulling="2025-11-24 09:06:23.283242636 +0000 UTC m=+759.614515888" observedRunningTime="2025-11-24 09:06:24.106532219 +0000 UTC m=+760.437805481" watchObservedRunningTime="2025-11-24 09:06:36.935606425 +0000 UTC m=+773.266879687" Nov 24 09:06:56 crc kubenswrapper[4719]: I1124 09:06:56.603306 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-c6ccddcb9-hhfps" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.297690 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t9glv"] Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.298327 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" containerName="registry-server" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.298354 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" containerName="registry-server" Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.298365 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" containerName="extract-content" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.298373 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" containerName="extract-content" Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.298388 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" containerName="extract-utilities" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.298396 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" containerName="extract-utilities" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.298526 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f829430-db77-41fb-b857-3b892a07bdb6" containerName="registry-server" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.300882 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.305217 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.305367 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-n9rf7" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.305619 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.317621 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-s55w7"] Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.318368 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.320081 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.321814 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.321880 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-conf\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.321907 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-startup\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.322047 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-sockets\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.322095 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-reloader\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.322131 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics-certs\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.322159 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gfgwh\" (UniqueName: \"kubernetes.io/projected/5dddbe20-c847-452a-ae82-5c12dc74d379-kube-api-access-gfgwh\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.340913 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-s55w7"] Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.415182 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6lqkr"] Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.416287 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.419633 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.419828 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kdmnm" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.419914 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.420018 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423656 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics-certs\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423691 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfgwh\" (UniqueName: \"kubernetes.io/projected/5dddbe20-c847-452a-ae82-5c12dc74d379-kube-api-access-gfgwh\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423714 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423761 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-conf\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423785 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzwjl\" (UniqueName: \"kubernetes.io/projected/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-kube-api-access-gzwjl\") pod \"frr-k8s-webhook-server-6998585d5-s55w7\" (UID: \"c3fe3e56-b4b2-48c9-9b95-5aa984326faa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423804 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-startup\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423817 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-cert\") pod \"frr-k8s-webhook-server-6998585d5-s55w7\" (UID: \"c3fe3e56-b4b2-48c9-9b95-5aa984326faa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423840 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-sockets\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.423842 4719 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.423863 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-reloader\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.423920 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics-certs podName:5dddbe20-c847-452a-ae82-5c12dc74d379 nodeName:}" failed. No retries permitted until 2025-11-24 09:06:57.923899383 +0000 UTC m=+794.255172715 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics-certs") pod "frr-k8s-t9glv" (UID: "5dddbe20-c847-452a-ae82-5c12dc74d379") : secret "frr-k8s-certs-secret" not found Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.424321 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-reloader\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.424535 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-sockets\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.424890 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.425224 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-conf\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.425536 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5dddbe20-c847-452a-ae82-5c12dc74d379-frr-startup\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.442608 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-2d8hg"] Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.443670 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.445680 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.471028 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfgwh\" (UniqueName: \"kubernetes.io/projected/5dddbe20-c847-452a-ae82-5c12dc74d379-kube-api-access-gfgwh\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.479974 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-2d8hg"] Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.524993 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-cert\") pod \"frr-k8s-webhook-server-6998585d5-s55w7\" (UID: \"c3fe3e56-b4b2-48c9-9b95-5aa984326faa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.525299 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89bc3754-b51b-44ed-9c94-5d7f074446e2-cert\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.525427 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metrics-certs\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.525523 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.525191 4719 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.525821 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffdfs\" (UniqueName: \"kubernetes.io/projected/89bc3754-b51b-44ed-9c94-5d7f074446e2-kube-api-access-ffdfs\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.525868 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-cert podName:c3fe3e56-b4b2-48c9-9b95-5aa984326faa nodeName:}" failed. No retries permitted until 2025-11-24 09:06:58.025845861 +0000 UTC m=+794.357119113 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-cert") pod "frr-k8s-webhook-server-6998585d5-s55w7" (UID: "c3fe3e56-b4b2-48c9-9b95-5aa984326faa") : secret "frr-k8s-webhook-server-cert" not found Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.526065 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metallb-excludel2\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.526236 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnsrz\" (UniqueName: \"kubernetes.io/projected/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-kube-api-access-jnsrz\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.526343 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89bc3754-b51b-44ed-9c94-5d7f074446e2-metrics-certs\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.526461 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzwjl\" (UniqueName: \"kubernetes.io/projected/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-kube-api-access-gzwjl\") pod \"frr-k8s-webhook-server-6998585d5-s55w7\" (UID: \"c3fe3e56-b4b2-48c9-9b95-5aa984326faa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.559003 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzwjl\" (UniqueName: \"kubernetes.io/projected/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-kube-api-access-gzwjl\") pod \"frr-k8s-webhook-server-6998585d5-s55w7\" (UID: \"c3fe3e56-b4b2-48c9-9b95-5aa984326faa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.627206 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89bc3754-b51b-44ed-9c94-5d7f074446e2-metrics-certs\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.627269 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89bc3754-b51b-44ed-9c94-5d7f074446e2-cert\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.627314 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metrics-certs\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.627346 4719 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.627392 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffdfs\" (UniqueName: \"kubernetes.io/projected/89bc3754-b51b-44ed-9c94-5d7f074446e2-kube-api-access-ffdfs\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.627422 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metallb-excludel2\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.627446 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnsrz\" (UniqueName: \"kubernetes.io/projected/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-kube-api-access-jnsrz\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.627761 4719 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.627811 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metrics-certs podName:ce9d612a-d5e7-4ab8-809e-97155ecda8ef nodeName:}" failed. No retries permitted until 2025-11-24 09:06:58.127795519 +0000 UTC m=+794.459068771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metrics-certs") pod "speaker-6lqkr" (UID: "ce9d612a-d5e7-4ab8-809e-97155ecda8ef") : secret "speaker-certs-secret" not found Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.627768 4719 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.628522 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metallb-excludel2\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: E1124 09:06:57.628524 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist podName:ce9d612a-d5e7-4ab8-809e-97155ecda8ef nodeName:}" failed. No retries permitted until 2025-11-24 09:06:58.128483719 +0000 UTC m=+794.459756971 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist") pod "speaker-6lqkr" (UID: "ce9d612a-d5e7-4ab8-809e-97155ecda8ef") : secret "metallb-memberlist" not found Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.632356 4719 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.646376 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89bc3754-b51b-44ed-9c94-5d7f074446e2-cert\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.648006 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/89bc3754-b51b-44ed-9c94-5d7f074446e2-metrics-certs\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.652449 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnsrz\" (UniqueName: \"kubernetes.io/projected/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-kube-api-access-jnsrz\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.659115 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffdfs\" (UniqueName: \"kubernetes.io/projected/89bc3754-b51b-44ed-9c94-5d7f074446e2-kube-api-access-ffdfs\") pod \"controller-6c7b4b5f48-2d8hg\" (UID: \"89bc3754-b51b-44ed-9c94-5d7f074446e2\") " pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.789454 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.932585 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics-certs\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:57 crc kubenswrapper[4719]: I1124 09:06:57.937551 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5dddbe20-c847-452a-ae82-5c12dc74d379-metrics-certs\") pod \"frr-k8s-t9glv\" (UID: \"5dddbe20-c847-452a-ae82-5c12dc74d379\") " pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.033881 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-cert\") pod \"frr-k8s-webhook-server-6998585d5-s55w7\" (UID: \"c3fe3e56-b4b2-48c9-9b95-5aa984326faa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.039248 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3fe3e56-b4b2-48c9-9b95-5aa984326faa-cert\") pod \"frr-k8s-webhook-server-6998585d5-s55w7\" (UID: \"c3fe3e56-b4b2-48c9-9b95-5aa984326faa\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.135898 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metrics-certs\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.135965 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:58 crc kubenswrapper[4719]: E1124 09:06:58.136131 4719 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 09:06:58 crc kubenswrapper[4719]: E1124 09:06:58.136193 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist podName:ce9d612a-d5e7-4ab8-809e-97155ecda8ef nodeName:}" failed. No retries permitted until 2025-11-24 09:06:59.136175661 +0000 UTC m=+795.467448913 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist") pod "speaker-6lqkr" (UID: "ce9d612a-d5e7-4ab8-809e-97155ecda8ef") : secret "metallb-memberlist" not found Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.139511 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-metrics-certs\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.187796 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-2d8hg"] Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.218078 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t9glv" Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.232130 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.233233 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-2d8hg" event={"ID":"89bc3754-b51b-44ed-9c94-5d7f074446e2","Type":"ContainerStarted","Data":"db0d1874cb4a1daf77386a56372fce68eef0b2027ef8973d5c5c4ed1bdf7c7af"} Nov 24 09:06:58 crc kubenswrapper[4719]: I1124 09:06:58.622006 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-s55w7"] Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.148404 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.158870 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce9d612a-d5e7-4ab8-809e-97155ecda8ef-memberlist\") pod \"speaker-6lqkr\" (UID: \"ce9d612a-d5e7-4ab8-809e-97155ecda8ef\") " pod="metallb-system/speaker-6lqkr" Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.239327 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerStarted","Data":"eab611574f0c52226a61e44c0b80b60f2fb3e57725e7abba163c10452ce18c76"} Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.240355 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" event={"ID":"c3fe3e56-b4b2-48c9-9b95-5aa984326faa","Type":"ContainerStarted","Data":"81772229d082ae3799c129a39fdb8b1d071330f49850a5b9ed3c99996c7c3843"} Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.241485 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6lqkr" Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.256968 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-2d8hg" event={"ID":"89bc3754-b51b-44ed-9c94-5d7f074446e2","Type":"ContainerStarted","Data":"5847a3c10e14fb5f3c1db45269f8e37227228f4d94e237ee3fe606c31a188aeb"} Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.257020 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-2d8hg" event={"ID":"89bc3754-b51b-44ed-9c94-5d7f074446e2","Type":"ContainerStarted","Data":"7e1f027d1b2346cf822d46a31197f73cf7e0b724e285f4fbbb9c58f1a0b08e8f"} Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.257224 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:06:59 crc kubenswrapper[4719]: I1124 09:06:59.314873 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-2d8hg" podStartSLOduration=2.314852984 podStartE2EDuration="2.314852984s" podCreationTimestamp="2025-11-24 09:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:06:59.311094818 +0000 UTC m=+795.642368090" watchObservedRunningTime="2025-11-24 09:06:59.314852984 +0000 UTC m=+795.646126246" Nov 24 09:07:00 crc kubenswrapper[4719]: I1124 09:07:00.277667 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lqkr" event={"ID":"ce9d612a-d5e7-4ab8-809e-97155ecda8ef","Type":"ContainerStarted","Data":"bf0300db3dfe3ffe495e871c69f5f9505f7ce3d9bc49380a8ffbc10a8808a5ae"} Nov 24 09:07:00 crc kubenswrapper[4719]: I1124 09:07:00.278046 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lqkr" event={"ID":"ce9d612a-d5e7-4ab8-809e-97155ecda8ef","Type":"ContainerStarted","Data":"419b7edc5efa6f6feb78fa64eb3aca37935e8ebc6cbc6890e8580497792ed0d9"} Nov 24 09:07:00 crc kubenswrapper[4719]: I1124 09:07:00.278067 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lqkr" event={"ID":"ce9d612a-d5e7-4ab8-809e-97155ecda8ef","Type":"ContainerStarted","Data":"edc77dbd3e34649498705419186f52e12293027944277b16f40ad3e194839892"} Nov 24 09:07:00 crc kubenswrapper[4719]: I1124 09:07:00.278268 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6lqkr" Nov 24 09:07:00 crc kubenswrapper[4719]: I1124 09:07:00.319571 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6lqkr" podStartSLOduration=3.319546876 podStartE2EDuration="3.319546876s" podCreationTimestamp="2025-11-24 09:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:07:00.311822728 +0000 UTC m=+796.643095990" watchObservedRunningTime="2025-11-24 09:07:00.319546876 +0000 UTC m=+796.650820128" Nov 24 09:07:04 crc kubenswrapper[4719]: I1124 09:07:04.562425 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:07:04 crc kubenswrapper[4719]: I1124 09:07:04.563015 4719 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:07:04 crc kubenswrapper[4719]: I1124 09:07:04.563078 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:07:04 crc kubenswrapper[4719]: I1124 09:07:04.563652 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9bafa1ff8cebfd6f7a09482f5227abe69557f213f9dda16fe6ddb7212992d3f"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:07:04 crc kubenswrapper[4719]: I1124 09:07:04.563708 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://e9bafa1ff8cebfd6f7a09482f5227abe69557f213f9dda16fe6ddb7212992d3f" gracePeriod=600 Nov 24 09:07:05 crc kubenswrapper[4719]: I1124 09:07:05.311845 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="e9bafa1ff8cebfd6f7a09482f5227abe69557f213f9dda16fe6ddb7212992d3f" exitCode=0 Nov 24 09:07:05 crc kubenswrapper[4719]: I1124 09:07:05.311881 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"e9bafa1ff8cebfd6f7a09482f5227abe69557f213f9dda16fe6ddb7212992d3f"} Nov 24 09:07:05 crc kubenswrapper[4719]: I1124 09:07:05.311910 4719 scope.go:117] "RemoveContainer" containerID="866b965215eb055030d3994c07592f9bfb5c1f1196954930e0485b0a35bdf8f1" Nov 24 09:07:07 crc kubenswrapper[4719]: I1124 09:07:07.328920 4719 generic.go:334] "Generic (PLEG): container finished" podID="5dddbe20-c847-452a-ae82-5c12dc74d379" containerID="2013c6b52fa53e8938e2b4ae82b838969152e8a75e3718a2bf4a35b220e1d3e7" exitCode=0 Nov 24 09:07:07 crc kubenswrapper[4719]: I1124 09:07:07.328982 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerDied","Data":"2013c6b52fa53e8938e2b4ae82b838969152e8a75e3718a2bf4a35b220e1d3e7"} Nov 24 09:07:07 crc kubenswrapper[4719]: I1124 09:07:07.332181 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" event={"ID":"c3fe3e56-b4b2-48c9-9b95-5aa984326faa","Type":"ContainerStarted","Data":"baa4bfa683dc13738078f24ab13e3052119b8d067db886c550f3aae64d4524bf"} Nov 24 09:07:07 crc kubenswrapper[4719]: I1124 09:07:07.332427 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:07:07 crc kubenswrapper[4719]: I1124 09:07:07.335529 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"c4aeeb69c1ab7122cad95da513920656c5e4ba5b3dd78419e124282e98483b06"} Nov 24 09:07:08 crc 
Nov 24 09:07:08 crc kubenswrapper[4719]: I1124 09:07:08.342186 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerDied","Data":"cf2ff8d742f769e82b8101f48b11f69af23a9629ac5288d01df8547d3ca7d4cf"}
Nov 24 09:07:08 crc kubenswrapper[4719]: I1124 09:07:08.365683 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" podStartSLOduration=3.637934333 podStartE2EDuration="11.365665781s" podCreationTimestamp="2025-11-24 09:06:57 +0000 UTC" firstStartedPulling="2025-11-24 09:06:58.633481789 +0000 UTC m=+794.964755041" lastFinishedPulling="2025-11-24 09:07:06.361213237 +0000 UTC m=+802.692486489" observedRunningTime="2025-11-24 09:07:07.413517523 +0000 UTC m=+803.744790785" watchObservedRunningTime="2025-11-24 09:07:08.365665781 +0000 UTC m=+804.696939053"
Nov 24 09:07:09 crc kubenswrapper[4719]: I1124 09:07:09.247482 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6lqkr"
Nov 24 09:07:09 crc kubenswrapper[4719]: I1124 09:07:09.349703 4719 generic.go:334] "Generic (PLEG): container finished" podID="5dddbe20-c847-452a-ae82-5c12dc74d379" containerID="46e5e2124f155c0143ba521edc49717a8fb5f233700c4f810c00e55a19cdf702" exitCode=0
Nov 24 09:07:09 crc kubenswrapper[4719]: I1124 09:07:09.349804 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerDied","Data":"46e5e2124f155c0143ba521edc49717a8fb5f233700c4f810c00e55a19cdf702"}
Nov 24 09:07:10 crc kubenswrapper[4719]: I1124 09:07:10.368732 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerStarted","Data":"2e6a8db7a7775518ee06fb173f96244bd09af05ee32c46d83d8fe93798ce3730"}
Nov 24 09:07:10 crc kubenswrapper[4719]: I1124 09:07:10.369070 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerStarted","Data":"4271d4ea496e222cae11e408384b2b5beb19557f3175f20798d094b6bbf4adf9"}
Nov 24 09:07:10 crc kubenswrapper[4719]: I1124 09:07:10.369086 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerStarted","Data":"c143a32d3cfe1b103bfd70e32568a4f62f364443eeb7e15970e4326ccc0bba2e"}
Nov 24 09:07:10 crc kubenswrapper[4719]: I1124 09:07:10.369097 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerStarted","Data":"cd6a1798826e4d4557588a5a1a46429fb23563598323fb31cb3535d04476a2b2"}
Nov 24 09:07:10 crc kubenswrapper[4719]: I1124 09:07:10.369107 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerStarted","Data":"fb18bf1953962e6288ae31fab1ed015c6447f337a4258a40f1e2442844755af6"}
Nov 24 09:07:11 crc kubenswrapper[4719]: I1124 09:07:11.378628 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
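pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerStarted","Data":"c866ba359964729c20cb3ae9a6d36042a76ecc3c43a78f04dda2b3f79363399d"}

The two figures in each "Observed pod startup duration" entry differ by exactly the image-pull window: podStartE2EDuration is wall-clock time from pod creation to observed running, while podStartSLOduration subtracts the time spent pulling images, which is not charged against the startup SLO. Pods that pulled nothing show the zero time 0001-01-01 and identical figures. The relationship checks out for every entry in this excerpt; worked through for frr-k8s-webhook-server using the monotonic m=+ offsets:

    # Figures copied from the frr-k8s-webhook-server-6998585d5-s55w7 entry above.
    e2e = 11.365665781                  # podStartE2EDuration, seconds
    pull_start = 794.964755041          # firstStartedPulling, m=+ offset
    pull_end   = 802.692486489         # lastFinishedPulling, m=+ offset

    slo = e2e - (pull_end - pull_start)
    print(f"{slo:.9f}")                 # 3.637934333 == podStartSLOduration
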
pod="metallb-system/frr-k8s-t9glv" event={"ID":"5dddbe20-c847-452a-ae82-5c12dc74d379","Type":"ContainerStarted","Data":"c866ba359964729c20cb3ae9a6d36042a76ecc3c43a78f04dda2b3f79363399d"} Nov 24 09:07:11 crc kubenswrapper[4719]: I1124 09:07:11.378939 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t9glv" Nov 24 09:07:11 crc kubenswrapper[4719]: I1124 09:07:11.402622 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t9glv" podStartSLOduration=6.496065055 podStartE2EDuration="14.402603731s" podCreationTimestamp="2025-11-24 09:06:57 +0000 UTC" firstStartedPulling="2025-11-24 09:06:58.48933604 +0000 UTC m=+794.820609292" lastFinishedPulling="2025-11-24 09:07:06.395874696 +0000 UTC m=+802.727147968" observedRunningTime="2025-11-24 09:07:11.396424887 +0000 UTC m=+807.727698159" watchObservedRunningTime="2025-11-24 09:07:11.402603731 +0000 UTC m=+807.733876983" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.130223 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d8pvd"] Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.130930 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d8pvd" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.133318 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nl9f9" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.133485 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.133835 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.144028 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d8pvd"] Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.255751 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mcb\" (UniqueName: \"kubernetes.io/projected/4c352914-8077-4b06-8e8d-15b8ff1c018f-kube-api-access-q7mcb\") pod \"openstack-operator-index-d8pvd\" (UID: \"4c352914-8077-4b06-8e8d-15b8ff1c018f\") " pod="openstack-operators/openstack-operator-index-d8pvd" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.357162 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7mcb\" (UniqueName: \"kubernetes.io/projected/4c352914-8077-4b06-8e8d-15b8ff1c018f-kube-api-access-q7mcb\") pod \"openstack-operator-index-d8pvd\" (UID: \"4c352914-8077-4b06-8e8d-15b8ff1c018f\") " pod="openstack-operators/openstack-operator-index-d8pvd" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.376184 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7mcb\" (UniqueName: \"kubernetes.io/projected/4c352914-8077-4b06-8e8d-15b8ff1c018f-kube-api-access-q7mcb\") pod \"openstack-operator-index-d8pvd\" (UID: \"4c352914-8077-4b06-8e8d-15b8ff1c018f\") " pod="openstack-operators/openstack-operator-index-d8pvd" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.448564 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d8pvd" Nov 24 09:07:12 crc kubenswrapper[4719]: I1124 09:07:12.896105 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d8pvd"] Nov 24 09:07:12 crc kubenswrapper[4719]: W1124 09:07:12.906189 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c352914_8077_4b06_8e8d_15b8ff1c018f.slice/crio-33e0821e81891e7935767b41c6e6d7ce0839725fb494bd15e66c800cd564b8eb WatchSource:0}: Error finding container 33e0821e81891e7935767b41c6e6d7ce0839725fb494bd15e66c800cd564b8eb: Status 404 returned error can't find the container with id 33e0821e81891e7935767b41c6e6d7ce0839725fb494bd15e66c800cd564b8eb Nov 24 09:07:13 crc kubenswrapper[4719]: I1124 09:07:13.219090 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t9glv" Nov 24 09:07:13 crc kubenswrapper[4719]: I1124 09:07:13.265573 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t9glv" Nov 24 09:07:13 crc kubenswrapper[4719]: I1124 09:07:13.397724 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d8pvd" event={"ID":"4c352914-8077-4b06-8e8d-15b8ff1c018f","Type":"ContainerStarted","Data":"33e0821e81891e7935767b41c6e6d7ce0839725fb494bd15e66c800cd564b8eb"} Nov 24 09:07:15 crc kubenswrapper[4719]: I1124 09:07:15.314549 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d8pvd"] Nov 24 09:07:15 crc kubenswrapper[4719]: I1124 09:07:15.921697 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-czgfr"] Nov 24 09:07:15 crc kubenswrapper[4719]: I1124 09:07:15.923215 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-czgfr" Nov 24 09:07:15 crc kubenswrapper[4719]: I1124 09:07:15.931023 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-czgfr"] Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.019782 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5j9d\" (UniqueName: \"kubernetes.io/projected/96d6d0aa-864c-432b-a1c1-5eef084a21b1-kube-api-access-g5j9d\") pod \"openstack-operator-index-czgfr\" (UID: \"96d6d0aa-864c-432b-a1c1-5eef084a21b1\") " pod="openstack-operators/openstack-operator-index-czgfr" Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.121762 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5j9d\" (UniqueName: \"kubernetes.io/projected/96d6d0aa-864c-432b-a1c1-5eef084a21b1-kube-api-access-g5j9d\") pod \"openstack-operator-index-czgfr\" (UID: \"96d6d0aa-864c-432b-a1c1-5eef084a21b1\") " pod="openstack-operators/openstack-operator-index-czgfr" Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.139374 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5j9d\" (UniqueName: \"kubernetes.io/projected/96d6d0aa-864c-432b-a1c1-5eef084a21b1-kube-api-access-g5j9d\") pod \"openstack-operator-index-czgfr\" (UID: \"96d6d0aa-864c-432b-a1c1-5eef084a21b1\") " pod="openstack-operators/openstack-operator-index-czgfr" Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.240516 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-czgfr" Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.415682 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d8pvd" event={"ID":"4c352914-8077-4b06-8e8d-15b8ff1c018f","Type":"ContainerStarted","Data":"f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34"} Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.415831 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-d8pvd" podUID="4c352914-8077-4b06-8e8d-15b8ff1c018f" containerName="registry-server" containerID="cri-o://f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34" gracePeriod=2 Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.434209 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d8pvd" podStartSLOduration=2.024930449 podStartE2EDuration="4.43418597s" podCreationTimestamp="2025-11-24 09:07:12 +0000 UTC" firstStartedPulling="2025-11-24 09:07:12.907798132 +0000 UTC m=+809.239071384" lastFinishedPulling="2025-11-24 09:07:15.317053653 +0000 UTC m=+811.648326905" observedRunningTime="2025-11-24 09:07:16.432277046 +0000 UTC m=+812.763550318" watchObservedRunningTime="2025-11-24 09:07:16.43418597 +0000 UTC m=+812.765459232" Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.631852 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-czgfr"] Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.793943 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d8pvd" Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.933575 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7mcb\" (UniqueName: \"kubernetes.io/projected/4c352914-8077-4b06-8e8d-15b8ff1c018f-kube-api-access-q7mcb\") pod \"4c352914-8077-4b06-8e8d-15b8ff1c018f\" (UID: \"4c352914-8077-4b06-8e8d-15b8ff1c018f\") " Nov 24 09:07:16 crc kubenswrapper[4719]: I1124 09:07:16.939291 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c352914-8077-4b06-8e8d-15b8ff1c018f-kube-api-access-q7mcb" (OuterVolumeSpecName: "kube-api-access-q7mcb") pod "4c352914-8077-4b06-8e8d-15b8ff1c018f" (UID: "4c352914-8077-4b06-8e8d-15b8ff1c018f"). InnerVolumeSpecName "kube-api-access-q7mcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.035699 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7mcb\" (UniqueName: \"kubernetes.io/projected/4c352914-8077-4b06-8e8d-15b8ff1c018f-kube-api-access-q7mcb\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.423377 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-czgfr" event={"ID":"96d6d0aa-864c-432b-a1c1-5eef084a21b1","Type":"ContainerStarted","Data":"a22f66d2ae2a0611a5a838c60f6fc1be8b701333c9c4ea6f2231b01061f3d3da"} Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.423430 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-czgfr" event={"ID":"96d6d0aa-864c-432b-a1c1-5eef084a21b1","Type":"ContainerStarted","Data":"23786c4b40f352af71f304d1b4410820a0eb51f97db6f77827b3bc2ed1fb63cd"} Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.426347 4719 generic.go:334] "Generic (PLEG): container finished" podID="4c352914-8077-4b06-8e8d-15b8ff1c018f" containerID="f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34" exitCode=0 Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.426391 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d8pvd" Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.426394 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d8pvd" event={"ID":"4c352914-8077-4b06-8e8d-15b8ff1c018f","Type":"ContainerDied","Data":"f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34"} Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.426565 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d8pvd" event={"ID":"4c352914-8077-4b06-8e8d-15b8ff1c018f","Type":"ContainerDied","Data":"33e0821e81891e7935767b41c6e6d7ce0839725fb494bd15e66c800cd564b8eb"} Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.426611 4719 scope.go:117] "RemoveContainer" containerID="f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34" Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.445368 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-czgfr" podStartSLOduration=2.396818654 podStartE2EDuration="2.445353044s" podCreationTimestamp="2025-11-24 09:07:15 +0000 UTC" firstStartedPulling="2025-11-24 09:07:16.65099002 +0000 UTC m=+812.982263272" lastFinishedPulling="2025-11-24 09:07:16.69952441 +0000 UTC m=+813.030797662" observedRunningTime="2025-11-24 09:07:17.442578156 +0000 UTC m=+813.773851428" watchObservedRunningTime="2025-11-24 09:07:17.445353044 +0000 UTC m=+813.776626296" Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.464780 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-d8pvd"] Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.466321 4719 scope.go:117] "RemoveContainer" containerID="f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34" Nov 24 09:07:17 crc kubenswrapper[4719]: E1124 09:07:17.466822 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34\": container with ID starting with f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34 not found: ID does not exist" containerID="f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34" Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.466860 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34"} err="failed to get container status \"f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34\": rpc error: code = NotFound desc = could not find container \"f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34\": container with ID starting with f2d2210a00c4fc6f23ba071e087c77bd6cf2e54c419203e211a5f194d835ec34 not found: ID does not exist" Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.468913 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-d8pvd"] Nov 24 09:07:17 crc kubenswrapper[4719]: I1124 09:07:17.794477 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-2d8hg" Nov 24 09:07:18 crc kubenswrapper[4719]: I1124 09:07:18.236807 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-s55w7" Nov 24 09:07:18 crc kubenswrapper[4719]: 
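I1124 09:07:18.526746 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c352914-8077-4b06-8e8d-15b8ff1c018f" path="/var/lib/kubelet/pods/4c352914-8077-4b06-8e8d-15b8ff1c018f/volumes"

The E1124 "ContainerStatus from runtime service failed" and "DeleteContainer returned error" entries above are logged at error level but read as idempotent-cleanup noise: RemoveContainer for f2d2210a... appears twice (at 09:07:17.426611 when the PLEG reports the container dead, and again at 09:07:17.466321 while handling the API DELETE), and the second status lookup finds the container already gone, hence NotFound. When triaging, a NotFound that follows a RemoveContainer for the same ID can usually be discounted. A sketch of that filter, same saved-journal assumption:

    import re

    REMOVED  = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
    NOTFOUND = re.compile(r'could not find container \\"([0-9a-f]{64})\\"')

    removed, suspicious = set(), []
    with open("kubelet.log") as fh:          # hypothetical path
        for line in fh:
            if m := REMOVED.search(line):
                removed.add(m.group(1))
            elif m := NOTFOUND.search(line):
                if m.group(1) not in removed:    # NotFound with no prior removal
                    suspicious.append(m.group(1))

    print("unexplained NotFound container IDs:", suspicious or "none")
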
Nov 24 09:07:26 crc kubenswrapper[4719]: I1124 09:07:26.241116 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-czgfr"
Nov 24 09:07:26 crc kubenswrapper[4719]: I1124 09:07:26.241569 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-czgfr"
Nov 24 09:07:26 crc kubenswrapper[4719]: I1124 09:07:26.263384 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-czgfr"
Nov 24 09:07:26 crc kubenswrapper[4719]: I1124 09:07:26.503105 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-czgfr"
Nov 24 09:07:28 crc kubenswrapper[4719]: I1124 09:07:28.221605 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t9glv"
Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.161376 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp"]
Nov 24 09:07:32 crc kubenswrapper[4719]: E1124 09:07:32.161892 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c352914-8077-4b06-8e8d-15b8ff1c018f" containerName="registry-server"
Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.161909 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c352914-8077-4b06-8e8d-15b8ff1c018f" containerName="registry-server"
Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.162049 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c352914-8077-4b06-8e8d-15b8ff1c018f" containerName="registry-server"
Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.162832 4719 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.165348 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ch69z" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.171992 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp"] Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.342957 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t96qf\" (UniqueName: \"kubernetes.io/projected/f43e7773-89ab-406b-a3dc-5e20a490eafc-kube-api-access-t96qf\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.343114 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-util\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.343239 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-bundle\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.445282 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t96qf\" (UniqueName: \"kubernetes.io/projected/f43e7773-89ab-406b-a3dc-5e20a490eafc-kube-api-access-t96qf\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.445381 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-util\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.445445 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-bundle\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.445982 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-util\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.446052 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-bundle\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.471143 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t96qf\" (UniqueName: \"kubernetes.io/projected/f43e7773-89ab-406b-a3dc-5e20a490eafc-kube-api-access-t96qf\") pod \"eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.482539 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:32 crc kubenswrapper[4719]: I1124 09:07:32.893659 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp"] Nov 24 09:07:32 crc kubenswrapper[4719]: W1124 09:07:32.898540 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf43e7773_89ab_406b_a3dc_5e20a490eafc.slice/crio-c80a4b4ba7bdb7656ac188ca310d9a3d86e0876d89ea70127ad0b67907451375 WatchSource:0}: Error finding container c80a4b4ba7bdb7656ac188ca310d9a3d86e0876d89ea70127ad0b67907451375: Status 404 returned error can't find the container with id c80a4b4ba7bdb7656ac188ca310d9a3d86e0876d89ea70127ad0b67907451375 Nov 24 09:07:33 crc kubenswrapper[4719]: I1124 09:07:33.522139 4719 generic.go:334] "Generic (PLEG): container finished" podID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerID="ef5f6065cc34930498a10f9ef2576fba13886ce803f365972ff8c4a9add68fb0" exitCode=0 Nov 24 09:07:33 crc kubenswrapper[4719]: I1124 09:07:33.522309 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" event={"ID":"f43e7773-89ab-406b-a3dc-5e20a490eafc","Type":"ContainerDied","Data":"ef5f6065cc34930498a10f9ef2576fba13886ce803f365972ff8c4a9add68fb0"} Nov 24 09:07:33 crc kubenswrapper[4719]: I1124 09:07:33.522453 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" event={"ID":"f43e7773-89ab-406b-a3dc-5e20a490eafc","Type":"ContainerStarted","Data":"c80a4b4ba7bdb7656ac188ca310d9a3d86e0876d89ea70127ad0b67907451375"} Nov 24 09:07:34 crc kubenswrapper[4719]: I1124 09:07:34.530310 4719 generic.go:334] "Generic (PLEG): container finished" podID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerID="fe4b6581639af20a13e470394e56ec8f8cbe57dc4709cc0a53ae98457c157614" exitCode=0 Nov 24 09:07:34 crc kubenswrapper[4719]: I1124 09:07:34.530356 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" event={"ID":"f43e7773-89ab-406b-a3dc-5e20a490eafc","Type":"ContainerDied","Data":"fe4b6581639af20a13e470394e56ec8f8cbe57dc4709cc0a53ae98457c157614"} Nov 24 09:07:35 crc kubenswrapper[4719]: I1124 09:07:35.537620 4719 generic.go:334] "Generic (PLEG): container finished" podID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerID="52db04cd98ffe8031de202a04cf163f97db514f0509578b59d874c6711493407" exitCode=0 Nov 24 09:07:35 crc kubenswrapper[4719]: I1124 09:07:35.537678 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" event={"ID":"f43e7773-89ab-406b-a3dc-5e20a490eafc","Type":"ContainerDied","Data":"52db04cd98ffe8031de202a04cf163f97db514f0509578b59d874c6711493407"} Nov 24 09:07:36 crc kubenswrapper[4719]: I1124 09:07:36.803924 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:36 crc kubenswrapper[4719]: I1124 09:07:36.906527 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-bundle\") pod \"f43e7773-89ab-406b-a3dc-5e20a490eafc\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " Nov 24 09:07:36 crc kubenswrapper[4719]: I1124 09:07:36.906656 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-util\") pod \"f43e7773-89ab-406b-a3dc-5e20a490eafc\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " Nov 24 09:07:36 crc kubenswrapper[4719]: I1124 09:07:36.906710 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t96qf\" (UniqueName: \"kubernetes.io/projected/f43e7773-89ab-406b-a3dc-5e20a490eafc-kube-api-access-t96qf\") pod \"f43e7773-89ab-406b-a3dc-5e20a490eafc\" (UID: \"f43e7773-89ab-406b-a3dc-5e20a490eafc\") " Nov 24 09:07:36 crc kubenswrapper[4719]: I1124 09:07:36.907514 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-bundle" (OuterVolumeSpecName: "bundle") pod "f43e7773-89ab-406b-a3dc-5e20a490eafc" (UID: "f43e7773-89ab-406b-a3dc-5e20a490eafc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:07:36 crc kubenswrapper[4719]: I1124 09:07:36.921411 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43e7773-89ab-406b-a3dc-5e20a490eafc-kube-api-access-t96qf" (OuterVolumeSpecName: "kube-api-access-t96qf") pod "f43e7773-89ab-406b-a3dc-5e20a490eafc" (UID: "f43e7773-89ab-406b-a3dc-5e20a490eafc"). InnerVolumeSpecName "kube-api-access-t96qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:36 crc kubenswrapper[4719]: I1124 09:07:36.922069 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-util" (OuterVolumeSpecName: "util") pod "f43e7773-89ab-406b-a3dc-5e20a490eafc" (UID: "f43e7773-89ab-406b-a3dc-5e20a490eafc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:07:37 crc kubenswrapper[4719]: I1124 09:07:37.008081 4719 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-util\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:37 crc kubenswrapper[4719]: I1124 09:07:37.008122 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t96qf\" (UniqueName: \"kubernetes.io/projected/f43e7773-89ab-406b-a3dc-5e20a490eafc-kube-api-access-t96qf\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:37 crc kubenswrapper[4719]: I1124 09:07:37.008135 4719 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f43e7773-89ab-406b-a3dc-5e20a490eafc-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:37 crc kubenswrapper[4719]: I1124 09:07:37.564851 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" event={"ID":"f43e7773-89ab-406b-a3dc-5e20a490eafc","Type":"ContainerDied","Data":"c80a4b4ba7bdb7656ac188ca310d9a3d86e0876d89ea70127ad0b67907451375"} Nov 24 09:07:37 crc kubenswrapper[4719]: I1124 09:07:37.564886 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c80a4b4ba7bdb7656ac188ca310d9a3d86e0876d89ea70127ad0b67907451375" Nov 24 09:07:37 crc kubenswrapper[4719]: I1124 09:07:37.564894 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.034786 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-phjk5"] Nov 24 09:07:41 crc kubenswrapper[4719]: E1124 09:07:41.035352 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerName="extract" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.035369 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerName="extract" Nov 24 09:07:41 crc kubenswrapper[4719]: E1124 09:07:41.035385 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerName="pull" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.035395 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerName="pull" Nov 24 09:07:41 crc kubenswrapper[4719]: E1124 09:07:41.035413 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerName="util" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.035421 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerName="util" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.035550 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43e7773-89ab-406b-a3dc-5e20a490eafc" containerName="extract" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.036545 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.058494 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phjk5"] Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.061991 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44qw\" (UniqueName: \"kubernetes.io/projected/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-kube-api-access-w44qw\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.062119 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-utilities\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.062165 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-catalog-content\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.163157 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-utilities\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.163232 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-catalog-content\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.163303 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44qw\" (UniqueName: \"kubernetes.io/projected/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-kube-api-access-w44qw\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.163727 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-utilities\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.163798 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-catalog-content\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.187600 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w44qw\" (UniqueName: \"kubernetes.io/projected/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-kube-api-access-w44qw\") pod \"redhat-marketplace-phjk5\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.352927 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:41 crc kubenswrapper[4719]: I1124 09:07:41.824691 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phjk5"] Nov 24 09:07:42 crc kubenswrapper[4719]: I1124 09:07:42.593600 4719 generic.go:334] "Generic (PLEG): container finished" podID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerID="637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2" exitCode=0 Nov 24 09:07:42 crc kubenswrapper[4719]: I1124 09:07:42.593683 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phjk5" event={"ID":"ae2d5a68-6f57-472d-87f2-e71cecaeba2c","Type":"ContainerDied","Data":"637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2"} Nov 24 09:07:42 crc kubenswrapper[4719]: I1124 09:07:42.593957 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phjk5" event={"ID":"ae2d5a68-6f57-472d-87f2-e71cecaeba2c","Type":"ContainerStarted","Data":"4122187fbf3d47a62caf053301c3ae13120a4a21c5c0c531fe6121d06f880f47"} Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.531145 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b"] Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.532028 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.535510 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-csd99" Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.564924 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b"] Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.605554 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phjk5" event={"ID":"ae2d5a68-6f57-472d-87f2-e71cecaeba2c","Type":"ContainerStarted","Data":"642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37"} Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.693225 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s42xg\" (UniqueName: \"kubernetes.io/projected/2065277b-46c2-4b27-9458-f671c1319c76-kube-api-access-s42xg\") pod \"openstack-operator-controller-operator-56cb4fc9f6-bx26b\" (UID: \"2065277b-46c2-4b27-9458-f671c1319c76\") " pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.794605 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s42xg\" (UniqueName: \"kubernetes.io/projected/2065277b-46c2-4b27-9458-f671c1319c76-kube-api-access-s42xg\") pod \"openstack-operator-controller-operator-56cb4fc9f6-bx26b\" (UID: \"2065277b-46c2-4b27-9458-f671c1319c76\") " pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.812449 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s42xg\" (UniqueName: \"kubernetes.io/projected/2065277b-46c2-4b27-9458-f671c1319c76-kube-api-access-s42xg\") pod \"openstack-operator-controller-operator-56cb4fc9f6-bx26b\" (UID: \"2065277b-46c2-4b27-9458-f671c1319c76\") " pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" Nov 24 09:07:43 crc kubenswrapper[4719]: I1124 09:07:43.846839 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" Nov 24 09:07:44 crc kubenswrapper[4719]: I1124 09:07:44.284823 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b"] Nov 24 09:07:44 crc kubenswrapper[4719]: W1124 09:07:44.293397 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2065277b_46c2_4b27_9458_f671c1319c76.slice/crio-78ffec8bd3836e4c671b8bde959bec29cdd708c9f7dd5921d44b4391042c434b WatchSource:0}: Error finding container 78ffec8bd3836e4c671b8bde959bec29cdd708c9f7dd5921d44b4391042c434b: Status 404 returned error can't find the container with id 78ffec8bd3836e4c671b8bde959bec29cdd708c9f7dd5921d44b4391042c434b Nov 24 09:07:44 crc kubenswrapper[4719]: I1124 09:07:44.613747 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" event={"ID":"2065277b-46c2-4b27-9458-f671c1319c76","Type":"ContainerStarted","Data":"78ffec8bd3836e4c671b8bde959bec29cdd708c9f7dd5921d44b4391042c434b"} Nov 24 09:07:44 crc kubenswrapper[4719]: I1124 09:07:44.617082 4719 generic.go:334] "Generic (PLEG): container finished" podID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerID="642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37" exitCode=0 Nov 24 09:07:44 crc kubenswrapper[4719]: I1124 09:07:44.617134 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phjk5" event={"ID":"ae2d5a68-6f57-472d-87f2-e71cecaeba2c","Type":"ContainerDied","Data":"642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37"} Nov 24 09:07:45 crc kubenswrapper[4719]: I1124 09:07:45.641245 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phjk5" event={"ID":"ae2d5a68-6f57-472d-87f2-e71cecaeba2c","Type":"ContainerStarted","Data":"d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f"} Nov 24 09:07:45 crc kubenswrapper[4719]: I1124 09:07:45.661600 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-phjk5" podStartSLOduration=1.968027617 podStartE2EDuration="4.661583024s" podCreationTimestamp="2025-11-24 09:07:41 +0000 UTC" firstStartedPulling="2025-11-24 09:07:42.595254964 +0000 UTC m=+838.926528216" lastFinishedPulling="2025-11-24 09:07:45.288810381 +0000 UTC m=+841.620083623" observedRunningTime="2025-11-24 09:07:45.66036588 +0000 UTC m=+841.991639132" watchObservedRunningTime="2025-11-24 09:07:45.661583024 +0000 UTC m=+841.992856276" Nov 24 09:07:49 crc kubenswrapper[4719]: I1124 09:07:49.670258 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" event={"ID":"2065277b-46c2-4b27-9458-f671c1319c76","Type":"ContainerStarted","Data":"2ee2af894fa601808715cdbeecdf747386bdc47c9245d66e3c9bf7d4408fdbf0"} Nov 24 09:07:51 crc kubenswrapper[4719]: I1124 09:07:51.353919 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:51 crc kubenswrapper[4719]: I1124 09:07:51.354262 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:51 crc kubenswrapper[4719]: I1124 09:07:51.410783 4719 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:51 crc kubenswrapper[4719]: I1124 09:07:51.747870 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:52 crc kubenswrapper[4719]: I1124 09:07:52.413333 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phjk5"] Nov 24 09:07:52 crc kubenswrapper[4719]: I1124 09:07:52.709526 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" event={"ID":"2065277b-46c2-4b27-9458-f671c1319c76","Type":"ContainerStarted","Data":"602fa9ce205f5170ccef0f8d0ff4d93dcfb41c1b5b8cf293bad49dc663bfc4ea"} Nov 24 09:07:53 crc kubenswrapper[4719]: I1124 09:07:53.718835 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" Nov 24 09:07:53 crc kubenswrapper[4719]: I1124 09:07:53.719284 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-phjk5" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerName="registry-server" containerID="cri-o://d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f" gracePeriod=2 Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.081342 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.100184 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" podStartSLOduration=3.724020204 podStartE2EDuration="11.100164376s" podCreationTimestamp="2025-11-24 09:07:43 +0000 UTC" firstStartedPulling="2025-11-24 09:07:44.294830192 +0000 UTC m=+840.626103454" lastFinishedPulling="2025-11-24 09:07:51.670974374 +0000 UTC m=+848.002247626" observedRunningTime="2025-11-24 09:07:52.741813523 +0000 UTC m=+849.073086795" watchObservedRunningTime="2025-11-24 09:07:54.100164376 +0000 UTC m=+850.431437628" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.239440 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-utilities\") pod \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.239523 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-catalog-content\") pod \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.239616 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w44qw\" (UniqueName: \"kubernetes.io/projected/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-kube-api-access-w44qw\") pod \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\" (UID: \"ae2d5a68-6f57-472d-87f2-e71cecaeba2c\") " Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.240355 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-utilities" (OuterVolumeSpecName: 
"utilities") pod "ae2d5a68-6f57-472d-87f2-e71cecaeba2c" (UID: "ae2d5a68-6f57-472d-87f2-e71cecaeba2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.248190 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-kube-api-access-w44qw" (OuterVolumeSpecName: "kube-api-access-w44qw") pod "ae2d5a68-6f57-472d-87f2-e71cecaeba2c" (UID: "ae2d5a68-6f57-472d-87f2-e71cecaeba2c"). InnerVolumeSpecName "kube-api-access-w44qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.263380 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae2d5a68-6f57-472d-87f2-e71cecaeba2c" (UID: "ae2d5a68-6f57-472d-87f2-e71cecaeba2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.341170 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.341218 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w44qw\" (UniqueName: \"kubernetes.io/projected/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-kube-api-access-w44qw\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.341233 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae2d5a68-6f57-472d-87f2-e71cecaeba2c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:07:54 crc kubenswrapper[4719]: E1124 09:07:54.646679 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae2d5a68_6f57_472d_87f2_e71cecaeba2c.slice/crio-4122187fbf3d47a62caf053301c3ae13120a4a21c5c0c531fe6121d06f880f47\": RecentStats: unable to find data in memory cache]" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.724669 4719 generic.go:334] "Generic (PLEG): container finished" podID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerID="d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f" exitCode=0 Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.724728 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phjk5" event={"ID":"ae2d5a68-6f57-472d-87f2-e71cecaeba2c","Type":"ContainerDied","Data":"d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f"} Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.724782 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phjk5" event={"ID":"ae2d5a68-6f57-472d-87f2-e71cecaeba2c","Type":"ContainerDied","Data":"4122187fbf3d47a62caf053301c3ae13120a4a21c5c0c531fe6121d06f880f47"} Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.724780 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phjk5" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.724804 4719 scope.go:117] "RemoveContainer" containerID="d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.729944 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-56cb4fc9f6-bx26b" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.743446 4719 scope.go:117] "RemoveContainer" containerID="642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.760141 4719 scope.go:117] "RemoveContainer" containerID="637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.780255 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phjk5"] Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.789106 4719 scope.go:117] "RemoveContainer" containerID="d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f" Nov 24 09:07:54 crc kubenswrapper[4719]: E1124 09:07:54.789479 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f\": container with ID starting with d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f not found: ID does not exist" containerID="d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.789513 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f"} err="failed to get container status \"d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f\": rpc error: code = NotFound desc = could not find container \"d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f\": container with ID starting with d83876704aa22d14b0c8e90521f424c7ff0a3bda52cedfb8c5b8c942d2d7d69f not found: ID does not exist" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.789540 4719 scope.go:117] "RemoveContainer" containerID="642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37" Nov 24 09:07:54 crc kubenswrapper[4719]: E1124 09:07:54.789743 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37\": container with ID starting with 642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37 not found: ID does not exist" containerID="642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.789837 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37"} err="failed to get container status \"642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37\": rpc error: code = NotFound desc = could not find container \"642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37\": container with ID starting with 642238bac68d537ee1cae533002d8df529d1e450fdb48b8560bf6eccb2a0ee37 not found: ID does not exist" Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.789933 4719 
Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.789933 4719 scope.go:117] "RemoveContainer" containerID="637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2"
Nov 24 09:07:54 crc kubenswrapper[4719]: E1124 09:07:54.790255 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2\": container with ID starting with 637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2 not found: ID does not exist" containerID="637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2"
Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.790281 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2"} err="failed to get container status \"637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2\": rpc error: code = NotFound desc = could not find container \"637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2\": container with ID starting with 637552e375c7ac2495b63c3b574738e9f25ffa10310bfcda37524175ea1a96b2 not found: ID does not exist"
Nov 24 09:07:54 crc kubenswrapper[4719]: I1124 09:07:54.791947 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-phjk5"]
Nov 24 09:07:56 crc kubenswrapper[4719]: I1124 09:07:56.529783 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" path="/var/lib/kubelet/pods/ae2d5a68-6f57-472d-87f2-e71cecaeba2c/volumes"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.194021 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jvrw"]
Nov 24 09:08:10 crc kubenswrapper[4719]: E1124 09:08:10.194678 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerName="extract-utilities"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.194689 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerName="extract-utilities"
Nov 24 09:08:10 crc kubenswrapper[4719]: E1124 09:08:10.194698 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerName="registry-server"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.194705 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerName="registry-server"
Nov 24 09:08:10 crc kubenswrapper[4719]: E1124 09:08:10.194721 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerName="extract-content"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.194726 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerName="extract-content"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.194827 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2d5a68-6f57-472d-87f2-e71cecaeba2c" containerName="registry-server"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.195579 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jvrw"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.202109 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jvrw"]
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.250468 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-utilities\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.250977 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-catalog-content\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.251104 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpqjv\" (UniqueName: \"kubernetes.io/projected/b66c736b-2b05-4c57-9518-a76a3d9f6e13-kube-api-access-wpqjv\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.353943 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-utilities\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.354009 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-catalog-content\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.354053 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpqjv\" (UniqueName: \"kubernetes.io/projected/b66c736b-2b05-4c57-9518-a76a3d9f6e13-kube-api-access-wpqjv\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.354688 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-utilities\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw"
Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.354717 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-catalog-content\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw"
"MountVolume.SetUp succeeded for volume \"kube-api-access-wpqjv\" (UniqueName: \"kubernetes.io/projected/b66c736b-2b05-4c57-9518-a76a3d9f6e13-kube-api-access-wpqjv\") pod \"certified-operators-8jvrw\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " pod="openshift-marketplace/certified-operators-8jvrw" Nov 24 09:08:10 crc kubenswrapper[4719]: I1124 09:08:10.526051 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jvrw" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.145739 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jvrw"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.376859 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.379064 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.387001 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.391324 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-g7p64" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.427096 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.428352 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.435236 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zj92d" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.458729 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.479673 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.480690 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.484228 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsz2q\" (UniqueName: \"kubernetes.io/projected/a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1-kube-api-access-gsz2q\") pod \"barbican-operator-controller-manager-75fb479bcc-6hhz5\" (UID: \"a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.484269 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk42q\" (UniqueName: \"kubernetes.io/projected/064a4ed4-46e3-4daf-8a9d-21c8475ba687-kube-api-access-fk42q\") pod \"cinder-operator-controller-manager-6498cbf48f-sf5qt\" (UID: \"064a4ed4-46e3-4daf-8a9d-21c8475ba687\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.484402 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-p7cxp" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.489554 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.490535 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.493760 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-j872z" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.525871 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.535376 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.536569 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.540346 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2vqnd" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.547824 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.562939 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-c9h59"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.563926 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.587747 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfc5\" (UniqueName: \"kubernetes.io/projected/9d35d376-e7fb-41da-bf47-efd2e5f3ea57-kube-api-access-ntfc5\") pod \"designate-operator-controller-manager-767ccfd65f-tjjkt\" (UID: \"9d35d376-e7fb-41da-bf47-efd2e5f3ea57\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.587807 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq89l\" (UniqueName: \"kubernetes.io/projected/5dce0610-7470-47d2-ae74-ca7fccb82b1f-kube-api-access-vq89l\") pod \"glance-operator-controller-manager-7969689c84-c9h59\" (UID: \"5dce0610-7470-47d2-ae74-ca7fccb82b1f\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.587850 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zff5h\" (UniqueName: \"kubernetes.io/projected/5a2058d2-1589-484e-a5a1-de7e31af1a63-kube-api-access-zff5h\") pod \"heat-operator-controller-manager-56f54d6746-xkfjt\" (UID: \"5a2058d2-1589-484e-a5a1-de7e31af1a63\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.587877 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmkx4\" (UniqueName: \"kubernetes.io/projected/9d835ba0-d338-45db-b417-7087d4cced01-kube-api-access-nmkx4\") pod \"horizon-operator-controller-manager-598f69df5d-j22wh\" (UID: \"9d835ba0-d338-45db-b417-7087d4cced01\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.587936 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsz2q\" (UniqueName: \"kubernetes.io/projected/a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1-kube-api-access-gsz2q\") pod \"barbican-operator-controller-manager-75fb479bcc-6hhz5\" (UID: \"a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.587963 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk42q\" (UniqueName: \"kubernetes.io/projected/064a4ed4-46e3-4daf-8a9d-21c8475ba687-kube-api-access-fk42q\") pod \"cinder-operator-controller-manager-6498cbf48f-sf5qt\" (UID: \"064a4ed4-46e3-4daf-8a9d-21c8475ba687\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.606743 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6jpkq" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.621095 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nhwqm"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.622554 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.634738 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-c9h59"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.642224 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsz2q\" (UniqueName: \"kubernetes.io/projected/a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1-kube-api-access-gsz2q\") pod \"barbican-operator-controller-manager-75fb479bcc-6hhz5\" (UID: \"a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.648095 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.660106 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.661100 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.665616 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-psdhq" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.669109 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.670129 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.684433 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c5w9f" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.684774 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.687572 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nhwqm"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690430 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690493 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntfc5\" (UniqueName: \"kubernetes.io/projected/9d35d376-e7fb-41da-bf47-efd2e5f3ea57-kube-api-access-ntfc5\") pod \"designate-operator-controller-manager-767ccfd65f-tjjkt\" (UID: \"9d35d376-e7fb-41da-bf47-efd2e5f3ea57\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690522 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9drtn\" (UniqueName: \"kubernetes.io/projected/231d0c7b-d43e-4169-8b4e-940289894809-kube-api-access-9drtn\") pod \"ironic-operator-controller-manager-99b499f4-4sxvh\" (UID: \"231d0c7b-d43e-4169-8b4e-940289894809\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690542 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq89l\" (UniqueName: \"kubernetes.io/projected/5dce0610-7470-47d2-ae74-ca7fccb82b1f-kube-api-access-vq89l\") pod \"glance-operator-controller-manager-7969689c84-c9h59\" (UID: \"5dce0610-7470-47d2-ae74-ca7fccb82b1f\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690566 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zff5h\" (UniqueName: \"kubernetes.io/projected/5a2058d2-1589-484e-a5a1-de7e31af1a63-kube-api-access-zff5h\") pod \"heat-operator-controller-manager-56f54d6746-xkfjt\" (UID: \"5a2058d2-1589-484e-a5a1-de7e31af1a63\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690589 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmkx4\" (UniqueName: \"kubernetes.io/projected/9d835ba0-d338-45db-b417-7087d4cced01-kube-api-access-nmkx4\") pod \"horizon-operator-controller-manager-598f69df5d-j22wh\" (UID: \"9d835ba0-d338-45db-b417-7087d4cced01\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690622 4719 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwf5f\" (UniqueName: \"kubernetes.io/projected/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-kube-api-access-qwf5f\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690651 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-utilities\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690673 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sp8j\" (UniqueName: \"kubernetes.io/projected/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-kube-api-access-9sp8j\") pod \"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.690697 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-catalog-content\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.695786 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.703676 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.719727 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.719883 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.732473 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7clh6" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.742654 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk42q\" (UniqueName: \"kubernetes.io/projected/064a4ed4-46e3-4daf-8a9d-21c8475ba687-kube-api-access-fk42q\") pod \"cinder-operator-controller-manager-6498cbf48f-sf5qt\" (UID: \"064a4ed4-46e3-4daf-8a9d-21c8475ba687\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.772873 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmkx4\" (UniqueName: \"kubernetes.io/projected/9d835ba0-d338-45db-b417-7087d4cced01-kube-api-access-nmkx4\") pod \"horizon-operator-controller-manager-598f69df5d-j22wh\" (UID: \"9d835ba0-d338-45db-b417-7087d4cced01\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.773485 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntfc5\" (UniqueName: \"kubernetes.io/projected/9d35d376-e7fb-41da-bf47-efd2e5f3ea57-kube-api-access-ntfc5\") pod \"designate-operator-controller-manager-767ccfd65f-tjjkt\" (UID: \"9d35d376-e7fb-41da-bf47-efd2e5f3ea57\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.776661 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zff5h\" (UniqueName: \"kubernetes.io/projected/5a2058d2-1589-484e-a5a1-de7e31af1a63-kube-api-access-zff5h\") pod \"heat-operator-controller-manager-56f54d6746-xkfjt\" (UID: \"5a2058d2-1589-484e-a5a1-de7e31af1a63\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.776739 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.776811 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq89l\" (UniqueName: \"kubernetes.io/projected/5dce0610-7470-47d2-ae74-ca7fccb82b1f-kube-api-access-vq89l\") pod \"glance-operator-controller-manager-7969689c84-c9h59\" (UID: \"5dce0610-7470-47d2-ae74-ca7fccb82b1f\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.788359 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.793835 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-utilities\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.793880 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sp8j\" (UniqueName: \"kubernetes.io/projected/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-kube-api-access-9sp8j\") pod \"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.793912 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-catalog-content\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.793935 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.793980 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9drtn\" (UniqueName: \"kubernetes.io/projected/231d0c7b-d43e-4169-8b4e-940289894809-kube-api-access-9drtn\") pod \"ironic-operator-controller-manager-99b499f4-4sxvh\" (UID: \"231d0c7b-d43e-4169-8b4e-940289894809\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.794014 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbq6x\" (UniqueName: \"kubernetes.io/projected/17ddd27a-66d1-4d80-abc7-80fde501fa8d-kube-api-access-wbq6x\") pod \"keystone-operator-controller-manager-7454b96578-lsd4k\" (UID: \"17ddd27a-66d1-4d80-abc7-80fde501fa8d\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.794056 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwf5f\" (UniqueName: \"kubernetes.io/projected/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-kube-api-access-qwf5f\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.794652 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-utilities\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.795007 4719 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-catalog-content\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm"
Nov 24 09:08:11 crc kubenswrapper[4719]: E1124 09:08:11.795084 4719 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 24 09:08:11 crc kubenswrapper[4719]: E1124 09:08:11.795124 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert podName:08979ac6-d1d0-4ef7-8996-5b02e8e8dae6 nodeName:}" failed. No retries permitted until 2025-11-24 09:08:12.29510988 +0000 UTC m=+868.626383132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert") pod "infra-operator-controller-manager-6dd8864d7c-fhb77" (UID: "08979ac6-d1d0-4ef7-8996-5b02e8e8dae6") : secret "infra-operator-webhook-server-cert" not found
Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.808138 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt"
Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.813085 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh"]
Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.817975 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt"
Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.827469 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8"]
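The secret errors in this block show the volume manager's retry discipline: a missing Secret fails MountVolume.SetUp, and nestedpendingoperations schedules the next attempt with an exponentially growing durationBeforeRetry (500ms here, 1s on the repeat failure further down). A sketch of the same wait-for-secret backoff using k8s.io/apimachinery's wait helpers; the namespace and secret name are taken from the log above, the backoff parameters merely echo the observed progression:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// 500ms doubling per step, echoing the durationBeforeRetry progression
	// (500ms, then 1s) visible in the nestedpendingoperations entries.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 6}
	err = wait.ExponentialBackoff(backoff, func() (bool, error) {
		_, getErr := client.CoreV1().Secrets("openstack-operators").Get(
			context.TODO(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			return false, nil // not created yet; try again after the next delay
		}
		return getErr == nil, getErr
	})
	if err != nil {
		fmt.Println("secret never appeared:", err)
		return
	}
	fmt.Println("secret present; the cert volume can now be mounted")
}
```

This mirrors why the failures here are transient: once the operator's webhook certificate secret is created, the very next retry mounts the volume and the pod proceeds.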
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.834513 4719 generic.go:334] "Generic (PLEG): container finished" podID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerID="35fbc1624f4f0bc2630c9deabf5dbaa557a25bd91233b7fa4d38ae0c5b72df09" exitCode=0 Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.834559 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jvrw" event={"ID":"b66c736b-2b05-4c57-9518-a76a3d9f6e13","Type":"ContainerDied","Data":"35fbc1624f4f0bc2630c9deabf5dbaa557a25bd91233b7fa4d38ae0c5b72df09"} Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.834585 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jvrw" event={"ID":"b66c736b-2b05-4c57-9518-a76a3d9f6e13","Type":"ContainerStarted","Data":"14f44af39530e3e5be3a22bd4bf97f8d4a1fe8c7583f4297b27a0adf4e2a197f"} Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.841424 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sp8j\" (UniqueName: \"kubernetes.io/projected/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-kube-api-access-9sp8j\") pod \"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.848672 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-g5pcc" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.854119 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.882468 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.890429 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9drtn\" (UniqueName: \"kubernetes.io/projected/231d0c7b-d43e-4169-8b4e-940289894809-kube-api-access-9drtn\") pod \"ironic-operator-controller-manager-99b499f4-4sxvh\" (UID: \"231d0c7b-d43e-4169-8b4e-940289894809\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.893572 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.894667 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.896103 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbq6x\" (UniqueName: \"kubernetes.io/projected/17ddd27a-66d1-4d80-abc7-80fde501fa8d-kube-api-access-wbq6x\") pod \"keystone-operator-controller-manager-7454b96578-lsd4k\" (UID: \"17ddd27a-66d1-4d80-abc7-80fde501fa8d\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.896233 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkst\" (UniqueName: \"kubernetes.io/projected/23502fbc-6d87-4ca2-80b3-d5af1e94205e-kube-api-access-zbkst\") pod \"manila-operator-controller-manager-58f887965d-lz2r8\" (UID: \"23502fbc-6d87-4ca2-80b3-d5af1e94205e\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.896992 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.934200 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85"] Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.943816 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwf5f\" (UniqueName: \"kubernetes.io/projected/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-kube-api-access-qwf5f\") pod \"community-operators-nhwqm\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.949325 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2kq9v" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.952891 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:11 crc kubenswrapper[4719]: I1124 09:08:11.975888 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbq6x\" (UniqueName: \"kubernetes.io/projected/17ddd27a-66d1-4d80-abc7-80fde501fa8d-kube-api-access-wbq6x\") pod \"keystone-operator-controller-manager-7454b96578-lsd4k\" (UID: \"17ddd27a-66d1-4d80-abc7-80fde501fa8d\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.019860 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.044222 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkst\" (UniqueName: \"kubernetes.io/projected/23502fbc-6d87-4ca2-80b3-d5af1e94205e-kube-api-access-zbkst\") pod \"manila-operator-controller-manager-58f887965d-lz2r8\" (UID: \"23502fbc-6d87-4ca2-80b3-d5af1e94205e\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.044291 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmfpf\" (UniqueName: \"kubernetes.io/projected/a0a59a11-1bf3-4ff8-8496-9414bc0ae549-kube-api-access-qmfpf\") pod \"mariadb-operator-controller-manager-54b5986bb8-r2r85\" (UID: \"a0a59a11-1bf3-4ff8-8496-9414bc0ae549\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.086044 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.087173 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.090496 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkst\" (UniqueName: \"kubernetes.io/projected/23502fbc-6d87-4ca2-80b3-d5af1e94205e-kube-api-access-zbkst\") pod \"manila-operator-controller-manager-58f887965d-lz2r8\" (UID: \"23502fbc-6d87-4ca2-80b3-d5af1e94205e\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.097448 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-z2gkj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.101915 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.107450 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.122967 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.124104 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.126996 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xlxm5" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.133707 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ttb89" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.142164 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.147292 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vnh\" (UniqueName: \"kubernetes.io/projected/30241c11-005e-4410-ad1a-71d6c5c0910f-kube-api-access-l8vnh\") pod \"neutron-operator-controller-manager-78bd47f458-lthw6\" (UID: \"30241c11-005e-4410-ad1a-71d6c5c0910f\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.147367 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqvxm\" (UniqueName: \"kubernetes.io/projected/1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce-kube-api-access-fqvxm\") pod \"octavia-operator-controller-manager-54cfbf4c7d-rnvl8\" (UID: \"1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.147392 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llztm\" (UniqueName: \"kubernetes.io/projected/070e32a3-4fa9-4ab4-9e55-d76c0c87db3c-kube-api-access-llztm\") pod \"nova-operator-controller-manager-cfbb9c588-plrvj\" (UID: \"070e32a3-4fa9-4ab4-9e55-d76c0c87db3c\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.147420 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmfpf\" (UniqueName: \"kubernetes.io/projected/a0a59a11-1bf3-4ff8-8496-9414bc0ae549-kube-api-access-qmfpf\") pod \"mariadb-operator-controller-manager-54b5986bb8-r2r85\" (UID: \"a0a59a11-1bf3-4ff8-8496-9414bc0ae549\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.162302 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.165302 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.175209 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.210301 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.211409 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.215251 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.233600 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sjllr" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.236683 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.240915 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.242520 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.245209 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-rr4tk" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.246507 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmfpf\" (UniqueName: \"kubernetes.io/projected/a0a59a11-1bf3-4ff8-8496-9414bc0ae549-kube-api-access-qmfpf\") pod \"mariadb-operator-controller-manager-54b5986bb8-r2r85\" (UID: \"a0a59a11-1bf3-4ff8-8496-9414bc0ae549\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.248440 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vnh\" (UniqueName: \"kubernetes.io/projected/30241c11-005e-4410-ad1a-71d6c5c0910f-kube-api-access-l8vnh\") pod \"neutron-operator-controller-manager-78bd47f458-lthw6\" (UID: \"30241c11-005e-4410-ad1a-71d6c5c0910f\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.248511 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqvxm\" (UniqueName: \"kubernetes.io/projected/1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce-kube-api-access-fqvxm\") pod \"octavia-operator-controller-manager-54cfbf4c7d-rnvl8\" (UID: \"1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.248539 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llztm\" (UniqueName: \"kubernetes.io/projected/070e32a3-4fa9-4ab4-9e55-d76c0c87db3c-kube-api-access-llztm\") pod \"nova-operator-controller-manager-cfbb9c588-plrvj\" (UID: \"070e32a3-4fa9-4ab4-9e55-d76c0c87db3c\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.253469 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.277203 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.296277 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.297543 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.298085 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.312295 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4grjn" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.319590 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vnh\" (UniqueName: \"kubernetes.io/projected/30241c11-005e-4410-ad1a-71d6c5c0910f-kube-api-access-l8vnh\") pod \"neutron-operator-controller-manager-78bd47f458-lthw6\" (UID: \"30241c11-005e-4410-ad1a-71d6c5c0910f\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.330729 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqvxm\" (UniqueName: \"kubernetes.io/projected/1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce-kube-api-access-fqvxm\") pod \"octavia-operator-controller-manager-54cfbf4c7d-rnvl8\" (UID: \"1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.350319 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwp7z\" (UniqueName: \"kubernetes.io/projected/643149e5-3960-4912-a497-c0cb9c0e722f-kube-api-access-gwp7z\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.350411 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.350453 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngd8\" (UniqueName: \"kubernetes.io/projected/c4688244-99a9-4a75-8501-b1062f24b517-kube-api-access-pngd8\") pod \"ovn-operator-controller-manager-54fc5f65b7-gqnbl\" (UID: \"c4688244-99a9-4a75-8501-b1062f24b517\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.350488 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert\") pod 
\"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:12 crc kubenswrapper[4719]: E1124 09:08:12.350606 4719 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 09:08:12 crc kubenswrapper[4719]: E1124 09:08:12.350649 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert podName:08979ac6-d1d0-4ef7-8996-5b02e8e8dae6 nodeName:}" failed. No retries permitted until 2025-11-24 09:08:13.350636791 +0000 UTC m=+869.681910043 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert") pod "infra-operator-controller-manager-6dd8864d7c-fhb77" (UID: "08979ac6-d1d0-4ef7-8996-5b02e8e8dae6") : secret "infra-operator-webhook-server-cert" not found Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.353503 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.354725 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llztm\" (UniqueName: \"kubernetes.io/projected/070e32a3-4fa9-4ab4-9e55-d76c0c87db3c-kube-api-access-llztm\") pod \"nova-operator-controller-manager-cfbb9c588-plrvj\" (UID: \"070e32a3-4fa9-4ab4-9e55-d76c0c87db3c\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.357720 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.358828 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.370652 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4kpvs" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.441295 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.443383 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.444998 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.454660 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngd8\" (UniqueName: \"kubernetes.io/projected/c4688244-99a9-4a75-8501-b1062f24b517-kube-api-access-pngd8\") pod \"ovn-operator-controller-manager-54fc5f65b7-gqnbl\" (UID: \"c4688244-99a9-4a75-8501-b1062f24b517\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.454706 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdtmg\" (UniqueName: \"kubernetes.io/projected/a951b65e-e9bd-43bc-9fa0-673642653e4c-kube-api-access-pdtmg\") pod \"placement-operator-controller-manager-5b797b8dff-d4vvj\" (UID: \"a951b65e-e9bd-43bc-9fa0-673642653e4c\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.454775 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwp7z\" (UniqueName: \"kubernetes.io/projected/643149e5-3960-4912-a497-c0cb9c0e722f-kube-api-access-gwp7z\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.454835 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:12 crc kubenswrapper[4719]: E1124 09:08:12.454941 4719 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 09:08:12 crc kubenswrapper[4719]: E1124 09:08:12.454980 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert podName:643149e5-3960-4912-a497-c0cb9c0e722f nodeName:}" failed. No retries permitted until 2025-11-24 09:08:12.954968586 +0000 UTC m=+869.286241838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" (UID: "643149e5-3960-4912-a497-c0cb9c0e722f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.459408 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.469873 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-pr4fv" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.470068 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.478288 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.499876 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwp7z\" (UniqueName: \"kubernetes.io/projected/643149e5-3960-4912-a497-c0cb9c0e722f-kube-api-access-gwp7z\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.501973 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bks8t"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.502979 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.503781 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngd8\" (UniqueName: \"kubernetes.io/projected/c4688244-99a9-4a75-8501-b1062f24b517-kube-api-access-pngd8\") pod \"ovn-operator-controller-manager-54fc5f65b7-gqnbl\" (UID: \"c4688244-99a9-4a75-8501-b1062f24b517\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.513369 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.526321 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sv8pd" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.555785 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdtmg\" (UniqueName: \"kubernetes.io/projected/a951b65e-e9bd-43bc-9fa0-673642653e4c-kube-api-access-pdtmg\") pod \"placement-operator-controller-manager-5b797b8dff-d4vvj\" (UID: \"a951b65e-e9bd-43bc-9fa0-673642653e4c\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.555866 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvvs\" (UniqueName: \"kubernetes.io/projected/3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b-kube-api-access-9fvvs\") pod \"swift-operator-controller-manager-d656998f4-tlsj6\" (UID: \"3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.555886 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8dp9\" (UniqueName: \"kubernetes.io/projected/714fe5a8-a778-4366-8823-868dd1210515-kube-api-access-h8dp9\") pod \"telemetry-operator-controller-manager-6d4bf84b58-m828t\" (UID: \"714fe5a8-a778-4366-8823-868dd1210515\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.597492 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdtmg\" (UniqueName: \"kubernetes.io/projected/a951b65e-e9bd-43bc-9fa0-673642653e4c-kube-api-access-pdtmg\") pod \"placement-operator-controller-manager-5b797b8dff-d4vvj\" (UID: \"a951b65e-e9bd-43bc-9fa0-673642653e4c\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.631309 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bks8t"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.631614 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.632731 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4"] Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.632879 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.649001 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7d4mh" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.671320 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5lbk\" (UniqueName: \"kubernetes.io/projected/7cfebe98-a194-4c28-861f-a80f9f9f22de-kube-api-access-g5lbk\") pod \"test-operator-controller-manager-b4c496f69-bks8t\" (UID: \"7cfebe98-a194-4c28-861f-a80f9f9f22de\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.671762 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvvs\" (UniqueName: \"kubernetes.io/projected/3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b-kube-api-access-9fvvs\") pod \"swift-operator-controller-manager-d656998f4-tlsj6\" (UID: \"3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.671891 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8dp9\" (UniqueName: \"kubernetes.io/projected/714fe5a8-a778-4366-8823-868dd1210515-kube-api-access-h8dp9\") pod \"telemetry-operator-controller-manager-6d4bf84b58-m828t\" (UID: \"714fe5a8-a778-4366-8823-868dd1210515\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.741981 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.769327 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.801663 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5lbk\" (UniqueName: \"kubernetes.io/projected/7cfebe98-a194-4c28-861f-a80f9f9f22de-kube-api-access-g5lbk\") pod \"test-operator-controller-manager-b4c496f69-bks8t\" (UID: \"7cfebe98-a194-4c28-861f-a80f9f9f22de\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.801806 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8m62\" (UniqueName: \"kubernetes.io/projected/d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc-kube-api-access-m8m62\") pod \"watcher-operator-controller-manager-8c6448b9f-br6f4\" (UID: \"d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.838031 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8dp9\" (UniqueName: \"kubernetes.io/projected/714fe5a8-a778-4366-8823-868dd1210515-kube-api-access-h8dp9\") pod \"telemetry-operator-controller-manager-6d4bf84b58-m828t\" (UID: \"714fe5a8-a778-4366-8823-868dd1210515\") " pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.842791 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvvs\" (UniqueName: \"kubernetes.io/projected/3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b-kube-api-access-9fvvs\") pod \"swift-operator-controller-manager-d656998f4-tlsj6\" (UID: \"3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.872808 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5lbk\" (UniqueName: \"kubernetes.io/projected/7cfebe98-a194-4c28-861f-a80f9f9f22de-kube-api-access-g5lbk\") pod \"test-operator-controller-manager-b4c496f69-bks8t\" (UID: \"7cfebe98-a194-4c28-861f-a80f9f9f22de\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.959704 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.959807 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8m62\" (UniqueName: \"kubernetes.io/projected/d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc-kube-api-access-m8m62\") pod \"watcher-operator-controller-manager-8c6448b9f-br6f4\" (UID: \"d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" Nov 24 09:08:12 crc kubenswrapper[4719]: E1124 09:08:12.960343 4719 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 09:08:12 crc 
kubenswrapper[4719]: E1124 09:08:12.960393 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert podName:643149e5-3960-4912-a497-c0cb9c0e722f nodeName:}" failed. No retries permitted until 2025-11-24 09:08:13.960377544 +0000 UTC m=+870.291650796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" (UID: "643149e5-3960-4912-a497-c0cb9c0e722f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 09:08:12 crc kubenswrapper[4719]: I1124 09:08:12.988170 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.035408 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx"] Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.037905 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.041323 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8m62\" (UniqueName: \"kubernetes.io/projected/d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc-kube-api-access-m8m62\") pod \"watcher-operator-controller-manager-8c6448b9f-br6f4\" (UID: \"d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.049815 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vsb6g" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.067896 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.070813 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshf7\" (UniqueName: \"kubernetes.io/projected/37253c68-54fd-490c-9486-f2a4f2ffe834-kube-api-access-qshf7\") pod \"openstack-operator-controller-manager-5f88c7d9f9-n97nx\" (UID: \"37253c68-54fd-490c-9486-f2a4f2ffe834\") " pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.070874 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.070920 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert\") pod \"openstack-operator-controller-manager-5f88c7d9f9-n97nx\" (UID: \"37253c68-54fd-490c-9486-f2a4f2ffe834\") " pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.084526 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.119755 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.128600 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx"] Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.160227 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj"] Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.163020 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.169717 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rqf75" Nov 24 09:08:13 crc kubenswrapper[4719]: E1124 09:08:13.182193 4719 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 09:08:13 crc kubenswrapper[4719]: E1124 09:08:13.182433 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert podName:37253c68-54fd-490c-9486-f2a4f2ffe834 nodeName:}" failed. No retries permitted until 2025-11-24 09:08:13.682418972 +0000 UTC m=+870.013692214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert") pod "openstack-operator-controller-manager-5f88c7d9f9-n97nx" (UID: "37253c68-54fd-490c-9486-f2a4f2ffe834") : secret "webhook-server-cert" not found Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.186676 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert\") pod \"openstack-operator-controller-manager-5f88c7d9f9-n97nx\" (UID: \"37253c68-54fd-490c-9486-f2a4f2ffe834\") " pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.186950 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshf7\" (UniqueName: \"kubernetes.io/projected/37253c68-54fd-490c-9486-f2a4f2ffe834-kube-api-access-qshf7\") pod \"openstack-operator-controller-manager-5f88c7d9f9-n97nx\" (UID: \"37253c68-54fd-490c-9486-f2a4f2ffe834\") " pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:13 crc kubenswrapper[4719]: W1124 09:08:13.209476 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2d8f034_9b1e_4b62_9c3a_ffc0c0379ad1.slice/crio-85c1e194b27f09711e00e8ca65380b94576787e30cf1a67b85f12c6730007610 WatchSource:0}: Error finding container 85c1e194b27f09711e00e8ca65380b94576787e30cf1a67b85f12c6730007610: Status 404 returned error can't find the container with id 85c1e194b27f09711e00e8ca65380b94576787e30cf1a67b85f12c6730007610 Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.214941 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt"] Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.229084 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshf7\" (UniqueName: \"kubernetes.io/projected/37253c68-54fd-490c-9486-f2a4f2ffe834-kube-api-access-qshf7\") pod \"openstack-operator-controller-manager-5f88c7d9f9-n97nx\" (UID: \"37253c68-54fd-490c-9486-f2a4f2ffe834\") " pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.283226 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj"] Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.289751 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwq5r\" (UniqueName: \"kubernetes.io/projected/33185bd6-40f2-4fb4-83b0-dd469f48598f-kube-api-access-pwq5r\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj\" (UID: \"33185bd6-40f2-4fb4-83b0-dd469f48598f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.298353 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5"] Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.391797 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwq5r\" (UniqueName: \"kubernetes.io/projected/33185bd6-40f2-4fb4-83b0-dd469f48598f-kube-api-access-pwq5r\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj\" (UID: \"33185bd6-40f2-4fb4-83b0-dd469f48598f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.391851 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:13 crc kubenswrapper[4719]: E1124 09:08:13.392081 4719 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 09:08:13 crc kubenswrapper[4719]: E1124 09:08:13.392756 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert podName:08979ac6-d1d0-4ef7-8996-5b02e8e8dae6 nodeName:}" failed. No retries permitted until 2025-11-24 09:08:15.392740229 +0000 UTC m=+871.724013481 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert") pod "infra-operator-controller-manager-6dd8864d7c-fhb77" (UID: "08979ac6-d1d0-4ef7-8996-5b02e8e8dae6") : secret "infra-operator-webhook-server-cert" not found Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.444685 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwq5r\" (UniqueName: \"kubernetes.io/projected/33185bd6-40f2-4fb4-83b0-dd469f48598f-kube-api-access-pwq5r\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj\" (UID: \"33185bd6-40f2-4fb4-83b0-dd469f48598f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.515671 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.701324 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt"] Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.703578 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert\") pod \"openstack-operator-controller-manager-5f88c7d9f9-n97nx\" (UID: \"37253c68-54fd-490c-9486-f2a4f2ffe834\") " pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:13 crc kubenswrapper[4719]: E1124 09:08:13.703798 4719 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 09:08:13 crc kubenswrapper[4719]: E1124 09:08:13.703844 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert podName:37253c68-54fd-490c-9486-f2a4f2ffe834 nodeName:}" failed. No retries permitted until 2025-11-24 09:08:14.703830561 +0000 UTC m=+871.035103813 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert") pod "openstack-operator-controller-manager-5f88c7d9f9-n97nx" (UID: "37253c68-54fd-490c-9486-f2a4f2ffe834") : secret "webhook-server-cert" not found Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.946569 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh"] Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.954112 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" event={"ID":"064a4ed4-46e3-4daf-8a9d-21c8475ba687","Type":"ContainerStarted","Data":"1bfcca81565555387c9b6fb95b8e511cc18800838fa8d9ee4bc90710968604b7"} Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.972788 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" event={"ID":"9d35d376-e7fb-41da-bf47-efd2e5f3ea57","Type":"ContainerStarted","Data":"8213e2d85ed416de34032d346b89f9792ae9da0f3686f15ec3f89253df4312a2"} Nov 24 09:08:13 crc kubenswrapper[4719]: I1124 09:08:13.989253 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" event={"ID":"a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1","Type":"ContainerStarted","Data":"85c1e194b27f09711e00e8ca65380b94576787e30cf1a67b85f12c6730007610"} Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.022638 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:14 crc kubenswrapper[4719]: E1124 09:08:14.022845 4719 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 09:08:14 crc kubenswrapper[4719]: E1124 09:08:14.022899 4719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert podName:643149e5-3960-4912-a497-c0cb9c0e722f nodeName:}" failed. No retries permitted until 2025-11-24 09:08:16.022883218 +0000 UTC m=+872.354156480 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert") pod "openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" (UID: "643149e5-3960-4912-a497-c0cb9c0e722f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.748666 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt"] Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.764256 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert\") pod \"openstack-operator-controller-manager-5f88c7d9f9-n97nx\" (UID: \"37253c68-54fd-490c-9486-f2a4f2ffe834\") " pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.776550 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37253c68-54fd-490c-9486-f2a4f2ffe834-cert\") pod \"openstack-operator-controller-manager-5f88c7d9f9-n97nx\" (UID: \"37253c68-54fd-490c-9486-f2a4f2ffe834\") " pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.791808 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8"] Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.854758 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85"] Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.918087 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nhwqm"] Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.959274 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh"] Nov 24 09:08:14 crc kubenswrapper[4719]: I1124 09:08:14.973444 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.029668 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" event={"ID":"a0a59a11-1bf3-4ff8-8496-9414bc0ae549","Type":"ContainerStarted","Data":"5bcbc880c810ee7d137c6133e1df9130b871171ccfe3dc2a006e28dc2e924602"} Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.035205 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.038228 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" event={"ID":"23502fbc-6d87-4ca2-80b3-d5af1e94205e","Type":"ContainerStarted","Data":"026cb6c3544878d664d461c4d9fce469388dfa220f15ba8c940be4caa1a60659"} Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.043096 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" event={"ID":"5a2058d2-1589-484e-a5a1-de7e31af1a63","Type":"ContainerStarted","Data":"fe394820055d13cb0da20bb686af6000ff6716fa34da3ec478770a5182442e0c"} Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.050180 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-c9h59"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.056090 4719 generic.go:334] "Generic (PLEG): container finished" podID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerID="0aecdebdbd767905444ca151bbae3728aad9336093d435d2125ff7f72b525031" exitCode=0 Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.056144 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jvrw" event={"ID":"b66c736b-2b05-4c57-9518-a76a3d9f6e13","Type":"ContainerDied","Data":"0aecdebdbd767905444ca151bbae3728aad9336093d435d2125ff7f72b525031"} Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.057821 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.059148 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" event={"ID":"231d0c7b-d43e-4169-8b4e-940289894809","Type":"ContainerStarted","Data":"fe64a3b6ae55a2c97fdf190fdd074263a95efb82aadf1324c2ea6c57632f9fb5"} Nov 24 09:08:15 crc kubenswrapper[4719]: W1124 09:08:15.100956 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17ddd27a_66d1_4d80_abc7_80fde501fa8d.slice/crio-eb3b62a1ffacf5a217e0cdcfa67ea35e96023a544bffd9169cfa64d119ee06ec WatchSource:0}: Error finding container eb3b62a1ffacf5a217e0cdcfa67ea35e96023a544bffd9169cfa64d119ee06ec: Status 404 returned error can't find the container with id eb3b62a1ffacf5a217e0cdcfa67ea35e96023a544bffd9169cfa64d119ee06ec Nov 24 09:08:15 crc kubenswrapper[4719]: W1124 09:08:15.108709 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dce0610_7470_47d2_ae74_ca7fccb82b1f.slice/crio-456bf69e009b4805ffbef69934fc2477e10570e200042bdbc1fb5d6d637bae89 WatchSource:0}: Error finding container 
456bf69e009b4805ffbef69934fc2477e10570e200042bdbc1fb5d6d637bae89: Status 404 returned error can't find the container with id 456bf69e009b4805ffbef69934fc2477e10570e200042bdbc1fb5d6d637bae89 Nov 24 09:08:15 crc kubenswrapper[4719]: W1124 09:08:15.120346 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070e32a3_4fa9_4ab4_9e55_d76c0c87db3c.slice/crio-23612dcc8da3faafb16a5f27044d2d4c3d731207b9773f4f2fdfa81e48a3e935 WatchSource:0}: Error finding container 23612dcc8da3faafb16a5f27044d2d4c3d731207b9773f4f2fdfa81e48a3e935: Status 404 returned error can't find the container with id 23612dcc8da3faafb16a5f27044d2d4c3d731207b9773f4f2fdfa81e48a3e935 Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.137155 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.205900 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8"] Nov 24 09:08:15 crc kubenswrapper[4719]: W1124 09:08:15.216497 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda951b65e_e9bd_43bc_9fa0_673642653e4c.slice/crio-77a1d518139f17ea7cb85b7b9d4384da29ba701425defa1285f96f617ea40e16 WatchSource:0}: Error finding container 77a1d518139f17ea7cb85b7b9d4384da29ba701425defa1285f96f617ea40e16: Status 404 returned error can't find the container with id 77a1d518139f17ea7cb85b7b9d4384da29ba701425defa1285f96f617ea40e16 Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.226912 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6"] Nov 24 09:08:15 crc kubenswrapper[4719]: W1124 09:08:15.229257 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4688244_99a9_4a75_8501_b1062f24b517.slice/crio-d9f3587fc08ba6f57386a11f21a491c65cf6c67d4ab2bac19cb2c18dfe592d68 WatchSource:0}: Error finding container d9f3587fc08ba6f57386a11f21a491c65cf6c67d4ab2bac19cb2c18dfe592d68: Status 404 returned error can't find the container with id d9f3587fc08ba6f57386a11f21a491c65cf6c67d4ab2bac19cb2c18dfe592d68 Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.242208 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.267926 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.291643 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6"] Nov 24 09:08:15 crc kubenswrapper[4719]: E1124 09:08:15.327902 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9fvvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d656998f4-tlsj6_openstack-operators(3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:08:15 crc kubenswrapper[4719]: E1124 09:08:15.351543 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8dp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d4bf84b58-m828t_openstack-operators(714fe5a8-a778-4366-8823-868dd1210515): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.428628 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-bks8t"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.456187 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.491551 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.510051 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj"] Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.551855 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/08979ac6-d1d0-4ef7-8996-5b02e8e8dae6-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-fhb77\" (UID: \"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:15 crc kubenswrapper[4719]: E1124 09:08:15.561356 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwq5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj_openstack-operators(33185bd6-40f2-4fb4-83b0-dd469f48598f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:08:15 crc kubenswrapper[4719]: E1124 09:08:15.561513 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m8m62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-8c6448b9f-br6f4_openstack-operators(d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 09:08:15 crc kubenswrapper[4719]: E1124 09:08:15.566814 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" podUID="33185bd6-40f2-4fb4-83b0-dd469f48598f" Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.603381 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:15 crc kubenswrapper[4719]: E1124 09:08:15.815967 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" podUID="3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b" Nov 24 09:08:15 crc kubenswrapper[4719]: E1124 09:08:15.855547 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" podUID="714fe5a8-a778-4366-8823-868dd1210515" Nov 24 09:08:15 crc kubenswrapper[4719]: I1124 09:08:15.926025 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx"] Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.075484 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" event={"ID":"3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b","Type":"ContainerStarted","Data":"0d40ec0d03d999d28fa373f69238ec1a9005fd58841cef1f10f23e1b3b642f86"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.075520 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" event={"ID":"3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b","Type":"ContainerStarted","Data":"fd8291093d343fde26cafbf98e83bf4c13e83fbcf5a2cefcff9d6da18c7845b7"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.077458 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" event={"ID":"33185bd6-40f2-4fb4-83b0-dd469f48598f","Type":"ContainerStarted","Data":"d8613b4c2885b81e79ec0013d96c340cdab55d65f0d14a41e705ff6cd7bf49f2"} Nov 24 09:08:16 crc kubenswrapper[4719]: E1124 09:08:16.077914 4719 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" podUID="3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b" Nov 24 09:08:16 crc kubenswrapper[4719]: E1124 09:08:16.087401 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" podUID="33185bd6-40f2-4fb4-83b0-dd469f48598f" Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.090009 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" event={"ID":"37253c68-54fd-490c-9486-f2a4f2ffe834","Type":"ContainerStarted","Data":"9169b8469226e9eb07c2ef74f645147c89c1006fcb31453e168455a9357d34b9"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.121686 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.134875 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" event={"ID":"17ddd27a-66d1-4d80-abc7-80fde501fa8d","Type":"ContainerStarted","Data":"eb3b62a1ffacf5a217e0cdcfa67ea35e96023a544bffd9169cfa64d119ee06ec"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.135588 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/643149e5-3960-4912-a497-c0cb9c0e722f-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-lf45p\" (UID: \"643149e5-3960-4912-a497-c0cb9c0e722f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.136880 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" event={"ID":"a951b65e-e9bd-43bc-9fa0-673642653e4c","Type":"ContainerStarted","Data":"77a1d518139f17ea7cb85b7b9d4384da29ba701425defa1285f96f617ea40e16"} Nov 24 09:08:16 crc kubenswrapper[4719]: E1124 09:08:16.139630 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" podUID="d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc" Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.146778 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" event={"ID":"070e32a3-4fa9-4ab4-9e55-d76c0c87db3c","Type":"ContainerStarted","Data":"23612dcc8da3faafb16a5f27044d2d4c3d731207b9773f4f2fdfa81e48a3e935"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.153884 4719 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" event={"ID":"9d835ba0-d338-45db-b417-7087d4cced01","Type":"ContainerStarted","Data":"d0b49c4ddf382f9b8de9861cec16f95848feda0714fa0986a46c8fe2f94c4755"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.159848 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" event={"ID":"d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc","Type":"ContainerStarted","Data":"b2e23a173315db34e98e274ff242f9380e3cecc705bc8394be51651410711174"} Nov 24 09:08:16 crc kubenswrapper[4719]: E1124 09:08:16.184260 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" podUID="d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc" Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.200814 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" event={"ID":"1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce","Type":"ContainerStarted","Data":"6f81aa3daf40ff615165d371a49e193a91e12100d7c80714a3bc088fd89ca23e"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.231798 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" event={"ID":"714fe5a8-a778-4366-8823-868dd1210515","Type":"ContainerStarted","Data":"2985d9e4d41c6d36b1530a1dfe0b75a427b47704ddc96eafc724a167684f7a2e"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.231842 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" event={"ID":"714fe5a8-a778-4366-8823-868dd1210515","Type":"ContainerStarted","Data":"b23c94d1e386a073ccadc049ce1d350290ea2f9a82d071e8d307b88cd3fb0639"} Nov 24 09:08:16 crc kubenswrapper[4719]: E1124 09:08:16.234212 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" podUID="714fe5a8-a778-4366-8823-868dd1210515" Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.240976 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.255735 4719 generic.go:334] "Generic (PLEG): container finished" podID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerID="ce2afeb452114ec5580e129739f1a58569bd48808c368012082bd9a3d62472f9" exitCode=0 Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.255825 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwqm" event={"ID":"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78","Type":"ContainerDied","Data":"ce2afeb452114ec5580e129739f1a58569bd48808c368012082bd9a3d62472f9"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.255857 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwqm" event={"ID":"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78","Type":"ContainerStarted","Data":"67c09edb85233eece01772da8f63f895eda2d061c4377e84e56213d65a8a0bd4"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.286698 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" event={"ID":"c4688244-99a9-4a75-8501-b1062f24b517","Type":"ContainerStarted","Data":"d9f3587fc08ba6f57386a11f21a491c65cf6c67d4ab2bac19cb2c18dfe592d68"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.288124 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" event={"ID":"7cfebe98-a194-4c28-861f-a80f9f9f22de","Type":"ContainerStarted","Data":"6a69b7335ec72cade1243bf85e261996a6c84f9321a3cbb464ee38910c61af9d"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.289448 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" event={"ID":"30241c11-005e-4410-ad1a-71d6c5c0910f","Type":"ContainerStarted","Data":"6b4ec524a4a156fe5448dcc88fdec75f3e5e5ff16e35de76bc61408653d5fbd0"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.301840 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" event={"ID":"5dce0610-7470-47d2-ae74-ca7fccb82b1f","Type":"ContainerStarted","Data":"456bf69e009b4805ffbef69934fc2477e10570e200042bdbc1fb5d6d637bae89"} Nov 24 09:08:16 crc kubenswrapper[4719]: I1124 09:08:16.514877 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77"] Nov 24 09:08:16 crc kubenswrapper[4719]: W1124 09:08:16.532086 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08979ac6_d1d0_4ef7_8996_5b02e8e8dae6.slice/crio-6511ea82108a154fa933386d6b6c5dea478019885ef3317ea2ca2c021cb24d3d WatchSource:0}: Error finding container 6511ea82108a154fa933386d6b6c5dea478019885ef3317ea2ca2c021cb24d3d: Status 404 returned error can't find the container with id 6511ea82108a154fa933386d6b6c5dea478019885ef3317ea2ca2c021cb24d3d Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.313819 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" event={"ID":"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6","Type":"ContainerStarted","Data":"6511ea82108a154fa933386d6b6c5dea478019885ef3317ea2ca2c021cb24d3d"} Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.327810 4719 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" event={"ID":"d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc","Type":"ContainerStarted","Data":"29a6aa221e2030982134616995c1a2ac47c439173af547c7640dffc1745b040c"} Nov 24 09:08:17 crc kubenswrapper[4719]: E1124 09:08:17.341313 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" podUID="d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc" Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.367137 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jvrw" event={"ID":"b66c736b-2b05-4c57-9518-a76a3d9f6e13","Type":"ContainerStarted","Data":"d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2"} Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.430289 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" event={"ID":"37253c68-54fd-490c-9486-f2a4f2ffe834","Type":"ContainerStarted","Data":"240e3fd34b9bd3b6f11a97a1a5103be20408c18ce98b3d4ca82b4545aba100d8"} Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.430322 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" event={"ID":"37253c68-54fd-490c-9486-f2a4f2ffe834","Type":"ContainerStarted","Data":"b129c6f096cefcbae87cf038b9f72db54a8c6cc77f159ef73dd164bccf054b9f"} Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.430336 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:17 crc kubenswrapper[4719]: E1124 09:08:17.447969 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" podUID="714fe5a8-a778-4366-8823-868dd1210515" Nov 24 09:08:17 crc kubenswrapper[4719]: E1124 09:08:17.448230 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" podUID="33185bd6-40f2-4fb4-83b0-dd469f48598f" Nov 24 09:08:17 crc kubenswrapper[4719]: E1124 09:08:17.448325 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" podUID="3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b" Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.452742 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p"] Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.573932 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jvrw" podStartSLOduration=3.362656891 podStartE2EDuration="7.573913921s" podCreationTimestamp="2025-11-24 09:08:10 +0000 UTC" firstStartedPulling="2025-11-24 09:08:11.866860175 +0000 UTC m=+868.198133417" lastFinishedPulling="2025-11-24 09:08:16.078117195 +0000 UTC m=+872.409390447" observedRunningTime="2025-11-24 09:08:17.573381896 +0000 UTC m=+873.904655148" watchObservedRunningTime="2025-11-24 09:08:17.573913921 +0000 UTC m=+873.905187173" Nov 24 09:08:17 crc kubenswrapper[4719]: I1124 09:08:17.860899 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" podStartSLOduration=5.860884352 podStartE2EDuration="5.860884352s" podCreationTimestamp="2025-11-24 09:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:08:17.859157683 +0000 UTC m=+874.190430955" watchObservedRunningTime="2025-11-24 09:08:17.860884352 +0000 UTC m=+874.192157604" Nov 24 09:08:18 crc kubenswrapper[4719]: I1124 09:08:18.459378 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" event={"ID":"643149e5-3960-4912-a497-c0cb9c0e722f","Type":"ContainerStarted","Data":"3dcc717ceab385b26c583414b52ce0c88a6b9b652b9fac0431c13bb8a948aaa5"} Nov 24 09:08:18 crc kubenswrapper[4719]: I1124 09:08:18.506054 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwqm" event={"ID":"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78","Type":"ContainerStarted","Data":"f74deeb4a8a1e74bd024929b408c5a1afbe916f01b77c7980a134cd3be75291e"} Nov 24 09:08:18 crc kubenswrapper[4719]: E1124 09:08:18.518864 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" podUID="d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc" Nov 24 09:08:19 crc kubenswrapper[4719]: I1124 09:08:19.543937 4719 generic.go:334] "Generic (PLEG): container finished" podID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerID="f74deeb4a8a1e74bd024929b408c5a1afbe916f01b77c7980a134cd3be75291e" exitCode=0 Nov 24 09:08:19 crc kubenswrapper[4719]: I1124 09:08:19.543984 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwqm" event={"ID":"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78","Type":"ContainerDied","Data":"f74deeb4a8a1e74bd024929b408c5a1afbe916f01b77c7980a134cd3be75291e"} Nov 24 09:08:20 crc kubenswrapper[4719]: I1124 09:08:20.634175 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jvrw" Nov 24 09:08:20 crc kubenswrapper[4719]: I1124 09:08:20.634449 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jvrw" Nov 24 09:08:21 crc kubenswrapper[4719]: I1124 09:08:21.607183 4719 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-nhwqm" event={"ID":"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78","Type":"ContainerStarted","Data":"5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c"} Nov 24 09:08:21 crc kubenswrapper[4719]: I1124 09:08:21.632677 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nhwqm" podStartSLOduration=6.825076051 podStartE2EDuration="10.632662276s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:16.289660167 +0000 UTC m=+872.620933419" lastFinishedPulling="2025-11-24 09:08:20.097246392 +0000 UTC m=+876.428519644" observedRunningTime="2025-11-24 09:08:21.631327448 +0000 UTC m=+877.962600730" watchObservedRunningTime="2025-11-24 09:08:21.632662276 +0000 UTC m=+877.963935528" Nov 24 09:08:21 crc kubenswrapper[4719]: I1124 09:08:21.755945 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8jvrw" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="registry-server" probeResult="failure" output=< Nov 24 09:08:21 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:08:21 crc kubenswrapper[4719]: > Nov 24 09:08:21 crc kubenswrapper[4719]: I1124 09:08:21.955372 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:21 crc kubenswrapper[4719]: I1124 09:08:21.955417 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:23 crc kubenswrapper[4719]: I1124 09:08:23.068391 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nhwqm" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="registry-server" probeResult="failure" output=< Nov 24 09:08:23 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:08:23 crc kubenswrapper[4719]: > Nov 24 09:08:24 crc kubenswrapper[4719]: I1124 09:08:24.981546 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5f88c7d9f9-n97nx" Nov 24 09:08:31 crc kubenswrapper[4719]: I1124 09:08:31.594634 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8jvrw" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="registry-server" probeResult="failure" output=< Nov 24 09:08:31 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:08:31 crc kubenswrapper[4719]: > Nov 24 09:08:32 crc kubenswrapper[4719]: E1124 09:08:32.801855 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 24 09:08:32 crc kubenswrapper[4719]: E1124 09:08:32.802367 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zff5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-56f54d6746-xkfjt_openstack-operators(5a2058d2-1589-484e-a5a1-de7e31af1a63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:33 crc kubenswrapper[4719]: I1124 09:08:33.000238 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nhwqm" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="registry-server" probeResult="failure" output=< Nov 24 09:08:33 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:08:33 crc kubenswrapper[4719]: > Nov 24 09:08:34 crc kubenswrapper[4719]: E1124 09:08:34.145768 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7" Nov 24 09:08:34 crc kubenswrapper[4719]: E1124 09:08:34.145943 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-llztm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-cfbb9c588-plrvj_openstack-operators(070e32a3-4fa9-4ab4-9e55-d76c0c87db3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:35 crc kubenswrapper[4719]: E1124 09:08:35.226027 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 24 09:08:35 crc kubenswrapper[4719]: E1124 09:08:35.226323 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdtmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b797b8dff-d4vvj_openstack-operators(a951b65e-e9bd-43bc-9fa0-673642653e4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:38 crc kubenswrapper[4719]: E1124 09:08:38.350767 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 24 09:08:38 crc kubenswrapper[4719]: E1124 09:08:38.351499 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmfpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-54b5986bb8-r2r85_openstack-operators(a0a59a11-1bf3-4ff8-8496-9414bc0ae549): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:40 crc kubenswrapper[4719]: E1124 09:08:40.031459 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 24 09:08:40 crc kubenswrapper[4719]: E1124 09:08:40.031652 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l8vnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78bd47f458-lthw6_openstack-operators(30241c11-005e-4410-ad1a-71d6c5c0910f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:40 crc kubenswrapper[4719]: I1124 09:08:40.569202 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jvrw" Nov 24 09:08:40 crc kubenswrapper[4719]: I1124 09:08:40.638258 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jvrw" Nov 24 09:08:40 crc kubenswrapper[4719]: E1124 09:08:40.751366 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 24 09:08:40 crc kubenswrapper[4719]: E1124 09:08:40.751630 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wbq6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7454b96578-lsd4k_openstack-operators(17ddd27a-66d1-4d80-abc7-80fde501fa8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:40 crc kubenswrapper[4719]: I1124 09:08:40.802484 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jvrw"] Nov 24 09:08:41 crc kubenswrapper[4719]: I1124 09:08:41.756487 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8jvrw" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="registry-server" containerID="cri-o://d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2" gracePeriod=2 Nov 24 09:08:41 crc kubenswrapper[4719]: I1124 09:08:41.993936 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:42 crc kubenswrapper[4719]: I1124 09:08:42.041008 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:43 crc kubenswrapper[4719]: I1124 09:08:43.199862 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nhwqm"] Nov 24 09:08:43 crc kubenswrapper[4719]: I1124 09:08:43.770295 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nhwqm" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="registry-server" containerID="cri-o://5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c" gracePeriod=2 Nov 24 09:08:44 crc kubenswrapper[4719]: E1124 09:08:44.561159 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894" Nov 24 09:08:44 crc kubenswrapper[4719]: E1124 09:08:44.561379 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m 
DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9sp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-6dd8864d7c-fhb77_openstack-operators(08979ac6-d1d0-4ef7-8996-5b02e8e8dae6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:44 crc kubenswrapper[4719]: I1124 09:08:44.780560 4719 generic.go:334] "Generic (PLEG): container finished" podID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerID="5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c" exitCode=0 Nov 24 09:08:44 crc kubenswrapper[4719]: I1124 09:08:44.780662 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwqm" event={"ID":"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78","Type":"ContainerDied","Data":"5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c"} Nov 24 09:08:44 crc kubenswrapper[4719]: I1124 09:08:44.783508 4719 generic.go:334] "Generic (PLEG): container finished" podID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerID="d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2" exitCode=0 Nov 24 09:08:44 crc kubenswrapper[4719]: I1124 09:08:44.783547 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jvrw" event={"ID":"b66c736b-2b05-4c57-9518-a76a3d9f6e13","Type":"ContainerDied","Data":"d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2"} Nov 24 09:08:45 crc kubenswrapper[4719]: E1124 09:08:45.818565 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd" Nov 24 09:08:45 crc kubenswrapper[4719]: E1124 09:08:45.819069 4719 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelo
pe-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL
_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:c
urrent-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-cen
tos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwp7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-8c7444f48-lf45p_openstack-operators(643149e5-3960-4912-a497-c0cb9c0e722f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:50 crc kubenswrapper[4719]: E1124 09:08:50.248889 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f" Nov 24 09:08:50 crc kubenswrapper[4719]: E1124 09:08:50.249595 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8dp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6d4bf84b58-m828t_openstack-operators(714fe5a8-a778-4366-8823-868dd1210515): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:50 crc kubenswrapper[4719]: E1124 09:08:50.251716 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" podUID="714fe5a8-a778-4366-8823-868dd1210515" Nov 24 09:08:50 crc kubenswrapper[4719]: E1124 09:08:50.527547 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2 is running failed: container process not found" containerID="d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 09:08:50 crc kubenswrapper[4719]: E1124 09:08:50.528545 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2 is running failed: container process not found" containerID="d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 09:08:50 crc kubenswrapper[4719]: E1124 09:08:50.529090 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2 is running failed: container process not found" containerID="d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 09:08:50 crc 
kubenswrapper[4719]: E1124 09:08:50.529197 4719 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-8jvrw" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="registry-server" Nov 24 09:08:51 crc kubenswrapper[4719]: E1124 09:08:51.955598 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c is running failed: container process not found" containerID="5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 09:08:51 crc kubenswrapper[4719]: E1124 09:08:51.955973 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c is running failed: container process not found" containerID="5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 09:08:51 crc kubenswrapper[4719]: E1124 09:08:51.957129 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c is running failed: container process not found" containerID="5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 09:08:51 crc kubenswrapper[4719]: E1124 09:08:51.957195 4719 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-nhwqm" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="registry-server" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.131967 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jvrw" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.136278 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.237974 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpqjv\" (UniqueName: \"kubernetes.io/projected/b66c736b-2b05-4c57-9518-a76a3d9f6e13-kube-api-access-wpqjv\") pod \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.238030 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-utilities\") pod \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.238074 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-utilities\") pod \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.238133 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-catalog-content\") pod \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\" (UID: \"b66c736b-2b05-4c57-9518-a76a3d9f6e13\") " Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.238179 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwf5f\" (UniqueName: \"kubernetes.io/projected/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-kube-api-access-qwf5f\") pod \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.238209 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-catalog-content\") pod \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\" (UID: \"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78\") " Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.238757 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-utilities" (OuterVolumeSpecName: "utilities") pod "8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" (UID: "8029fd38-dc7e-4dd9-8ee9-29446b6e4a78"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.239380 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-utilities" (OuterVolumeSpecName: "utilities") pod "b66c736b-2b05-4c57-9518-a76a3d9f6e13" (UID: "b66c736b-2b05-4c57-9518-a76a3d9f6e13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.243061 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-kube-api-access-qwf5f" (OuterVolumeSpecName: "kube-api-access-qwf5f") pod "8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" (UID: "8029fd38-dc7e-4dd9-8ee9-29446b6e4a78"). InnerVolumeSpecName "kube-api-access-qwf5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.245008 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66c736b-2b05-4c57-9518-a76a3d9f6e13-kube-api-access-wpqjv" (OuterVolumeSpecName: "kube-api-access-wpqjv") pod "b66c736b-2b05-4c57-9518-a76a3d9f6e13" (UID: "b66c736b-2b05-4c57-9518-a76a3d9f6e13"). InnerVolumeSpecName "kube-api-access-wpqjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.289124 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b66c736b-2b05-4c57-9518-a76a3d9f6e13" (UID: "b66c736b-2b05-4c57-9518-a76a3d9f6e13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.290537 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" (UID: "8029fd38-dc7e-4dd9-8ee9-29446b6e4a78"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.339482 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwf5f\" (UniqueName: \"kubernetes.io/projected/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-kube-api-access-qwf5f\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.339546 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.339562 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpqjv\" (UniqueName: \"kubernetes.io/projected/b66c736b-2b05-4c57-9518-a76a3d9f6e13-kube-api-access-wpqjv\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.339575 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.339588 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.339599 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b66c736b-2b05-4c57-9518-a76a3d9f6e13-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:08:53 crc kubenswrapper[4719]: E1124 09:08:53.668108 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 24 09:08:53 crc kubenswrapper[4719]: E1124 09:08:53.669161 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pwq5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj_openstack-operators(33185bd6-40f2-4fb4-83b0-dd469f48598f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:08:53 crc kubenswrapper[4719]: E1124 09:08:53.670307 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" podUID="33185bd6-40f2-4fb4-83b0-dd469f48598f" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.846650 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nhwqm" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.846649 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwqm" event={"ID":"8029fd38-dc7e-4dd9-8ee9-29446b6e4a78","Type":"ContainerDied","Data":"67c09edb85233eece01772da8f63f895eda2d061c4377e84e56213d65a8a0bd4"} Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.846709 4719 scope.go:117] "RemoveContainer" containerID="5ddc9ae8445bdd54cb45ecb957ab41622b90acbd3f4e48cc97d486987b67d85c" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.850393 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jvrw" event={"ID":"b66c736b-2b05-4c57-9518-a76a3d9f6e13","Type":"ContainerDied","Data":"14f44af39530e3e5be3a22bd4bf97f8d4a1fe8c7583f4297b27a0adf4e2a197f"} Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.850463 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jvrw" Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.878401 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nhwqm"] Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.887579 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nhwqm"] Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.894598 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jvrw"] Nov 24 09:08:53 crc kubenswrapper[4719]: I1124 09:08:53.900778 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jvrw"] Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.289743 4719 scope.go:117] "RemoveContainer" containerID="f74deeb4a8a1e74bd024929b408c5a1afbe916f01b77c7980a134cd3be75291e" Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.356285 4719 scope.go:117] "RemoveContainer" containerID="ce2afeb452114ec5580e129739f1a58569bd48808c368012082bd9a3d62472f9" Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.404445 4719 scope.go:117] "RemoveContainer" containerID="d7da2bbb77ec91f012708d56e0b5f22699e01247cc7c428d36883db8293520a2" Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.470021 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" podUID="a0a59a11-1bf3-4ff8-8496-9414bc0ae549" Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.478219 4719 scope.go:117] "RemoveContainer" containerID="0aecdebdbd767905444ca151bbae3728aad9336093d435d2125ff7f72b525031" Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.489688 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" podUID="5a2058d2-1589-484e-a5a1-de7e31af1a63" Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.492840 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" podUID="070e32a3-4fa9-4ab4-9e55-d76c0c87db3c" Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.498975 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" podUID="a951b65e-e9bd-43bc-9fa0-673642653e4c" Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.534875 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" path="/var/lib/kubelet/pods/8029fd38-dc7e-4dd9-8ee9-29446b6e4a78/volumes" Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.535670 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" path="/var/lib/kubelet/pods/b66c736b-2b05-4c57-9518-a76a3d9f6e13/volumes" Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.535802 4719 scope.go:117] "RemoveContainer" containerID="35fbc1624f4f0bc2630c9deabf5dbaa557a25bd91233b7fa4d38ae0c5b72df09" Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.573657 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" podUID="08979ac6-d1d0-4ef7-8996-5b02e8e8dae6" Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.725419 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" podUID="30241c11-005e-4410-ad1a-71d6c5c0910f" Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.842302 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" podUID="643149e5-3960-4912-a497-c0cb9c0e722f" Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.865268 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" podUID="17ddd27a-66d1-4d80-abc7-80fde501fa8d" Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.881508 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" event={"ID":"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6","Type":"ContainerStarted","Data":"7c3656524c1d679282ae3650954e8b3619a7b8fad876c06e1d2209a76beda3e6"} Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.913586 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" event={"ID":"070e32a3-4fa9-4ab4-9e55-d76c0c87db3c","Type":"ContainerStarted","Data":"afff94bb13b45c7262b4f9b8add6c801dfdbf2317cf134c86e939186f5a39cec"} Nov 24 09:08:54 crc kubenswrapper[4719]: I1124 09:08:54.934386 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" event={"ID":"643149e5-3960-4912-a497-c0cb9c0e722f","Type":"ContainerStarted","Data":"4f79c6325e0271ef0772d6beaa573b45bcde115fa9a7333f79cd7a894fc5a21b"} Nov 24 09:08:54 crc kubenswrapper[4719]: E1124 09:08:54.966998 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" podUID="643149e5-3960-4912-a497-c0cb9c0e722f" Nov 24 09:08:55 crc kubenswrapper[4719]: I1124 09:08:55.003781 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" event={"ID":"1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce","Type":"ContainerStarted","Data":"28907b86a76e83f334557201adcd8689c7c232b01cb357eb5a0b3bb146992576"} Nov 24 09:08:55 crc kubenswrapper[4719]: I1124 09:08:55.044312 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" event={"ID":"30241c11-005e-4410-ad1a-71d6c5c0910f","Type":"ContainerStarted","Data":"febac5becd5cd282b6b9629419d7d646eecf71eb2cb0dd5a6bea46fca288f67e"} Nov 24 09:08:55 crc kubenswrapper[4719]: I1124 09:08:55.099183 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" event={"ID":"a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1","Type":"ContainerStarted","Data":"d8fb7c169b19bad397044df430e33d070f13fa36345f730ce1696f49c0020245"} Nov 24 09:08:55 crc kubenswrapper[4719]: I1124 09:08:55.141180 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" event={"ID":"a0a59a11-1bf3-4ff8-8496-9414bc0ae549","Type":"ContainerStarted","Data":"1c662f2209a3522d43301ae98d05c8f9f3391944b1322c141481254e2cd87179"} Nov 24 09:08:55 crc kubenswrapper[4719]: I1124 09:08:55.154233 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" event={"ID":"5dce0610-7470-47d2-ae74-ca7fccb82b1f","Type":"ContainerStarted","Data":"bed1a78cafee8f6f1500f9a083a2bee4555516012fa5a83492ca7441c558a443"} Nov 24 09:08:55 crc kubenswrapper[4719]: I1124 09:08:55.189718 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" event={"ID":"a951b65e-e9bd-43bc-9fa0-673642653e4c","Type":"ContainerStarted","Data":"134d5983eff86ef50ff3957bb1d06294918bfd37b6d69cd493019f67dc29a117"} Nov 24 09:08:55 crc kubenswrapper[4719]: I1124 09:08:55.214629 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" event={"ID":"9d35d376-e7fb-41da-bf47-efd2e5f3ea57","Type":"ContainerStarted","Data":"1b164f8a5d1e020ca15df08c2c6591fb9901e54c603c44c02b434b22ffe8a5c5"} Nov 24 09:08:55 crc kubenswrapper[4719]: I1124 09:08:55.253843 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" event={"ID":"5a2058d2-1589-484e-a5a1-de7e31af1a63","Type":"ContainerStarted","Data":"946a7dcd6f63cc1e4f94c277ae637bf668270091de6c1b00af7349c5f77e4b4d"} Nov 24 09:08:55 crc 
kubenswrapper[4719]: I1124 09:08:55.276645 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" event={"ID":"231d0c7b-d43e-4169-8b4e-940289894809","Type":"ContainerStarted","Data":"d7c85dd8e5fc44397adcb8e236707a5c637d92f5db0e470c9386d52898a75963"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.314919 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" event={"ID":"231d0c7b-d43e-4169-8b4e-940289894809","Type":"ContainerStarted","Data":"523aedb53861ddf60d8919713e25109e24005c61a80409d27a8ce0fda622815c"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.317671 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.351054 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" podStartSLOduration=9.373872984 podStartE2EDuration="45.351023179s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:14.021237471 +0000 UTC m=+870.352510723" lastFinishedPulling="2025-11-24 09:08:49.998387666 +0000 UTC m=+906.329660918" observedRunningTime="2025-11-24 09:08:56.345319814 +0000 UTC m=+912.676593076" watchObservedRunningTime="2025-11-24 09:08:56.351023179 +0000 UTC m=+912.682296431" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.371295 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" event={"ID":"a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1","Type":"ContainerStarted","Data":"1b9c60e6284e72e3aa9824ad5c919ca8f0da1dc88ca3d5db31dc36ef3201c340"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.371555 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.378491 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" event={"ID":"17ddd27a-66d1-4d80-abc7-80fde501fa8d","Type":"ContainerStarted","Data":"a8a1b8192ca8a71fad5dbebc2915cfdbebe92d9a796a05501bf33d45a894c387"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.381465 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" event={"ID":"c4688244-99a9-4a75-8501-b1062f24b517","Type":"ContainerStarted","Data":"56df9b4fcbf5c43e42c0c9a3e5bd30d6cd6fa84b3726350c99b3a382c836aeff"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.381690 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" event={"ID":"c4688244-99a9-4a75-8501-b1062f24b517","Type":"ContainerStarted","Data":"9b1283522107d5e23fee698f9199043c586fb0e7d48d429fe7c2a5ae0a0d4eb5"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.382402 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.391176 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" 
event={"ID":"064a4ed4-46e3-4daf-8a9d-21c8475ba687","Type":"ContainerStarted","Data":"a0afd4111e6118f507060bf6d523261ecc32926e52872340fbe4270952aea54a"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.391249 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" event={"ID":"064a4ed4-46e3-4daf-8a9d-21c8475ba687","Type":"ContainerStarted","Data":"d5a799bd2a390b3625e1c87e57495aba34f1ba9a09535189d70314004f7df4a6"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.392255 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.403432 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" event={"ID":"23502fbc-6d87-4ca2-80b3-d5af1e94205e","Type":"ContainerStarted","Data":"843fc14fa495cfb85d58d4a457b7e041862ae8c8f065a8a1a352f6248d67068e"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.403655 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" event={"ID":"23502fbc-6d87-4ca2-80b3-d5af1e94205e","Type":"ContainerStarted","Data":"9515d390af4df4c7e7d9c21a17b36abe72abb2a22c5e83ab22093445ee660029"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.404326 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.406512 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" podStartSLOduration=8.683363008 podStartE2EDuration="45.406496662s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:13.274079259 +0000 UTC m=+869.605352501" lastFinishedPulling="2025-11-24 09:08:49.997212903 +0000 UTC m=+906.328486155" observedRunningTime="2025-11-24 09:08:56.399030387 +0000 UTC m=+912.730303649" watchObservedRunningTime="2025-11-24 09:08:56.406496662 +0000 UTC m=+912.737769914" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.407323 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" event={"ID":"9d35d376-e7fb-41da-bf47-efd2e5f3ea57","Type":"ContainerStarted","Data":"e3e1d3f3ffd1240b0727dc5203a7cfb14d174d751fd58d4b82bad6a36229c01f"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.411026 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.423204 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" event={"ID":"5dce0610-7470-47d2-ae74-ca7fccb82b1f","Type":"ContainerStarted","Data":"ce0d522c8d7975605ac1c15987697e751914c3a7723893e3ac248e907970f054"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.424852 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.441867 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" event={"ID":"9d835ba0-d338-45db-b417-7087d4cced01","Type":"ContainerStarted","Data":"44936d7924cce9141de35659dc5cda81afec7ad8e059a6aa9905eae199d85609"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.442351 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" event={"ID":"9d835ba0-d338-45db-b417-7087d4cced01","Type":"ContainerStarted","Data":"f71053276f89e79b8b61bd451b196f512bfe7c7914cd74452db018dfba55db19"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.442448 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.444465 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" podStartSLOduration=8.764436283 podStartE2EDuration="45.444453609s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:13.316471796 +0000 UTC m=+869.647745048" lastFinishedPulling="2025-11-24 09:08:49.996489122 +0000 UTC m=+906.327762374" observedRunningTime="2025-11-24 09:08:56.438651951 +0000 UTC m=+912.769925213" watchObservedRunningTime="2025-11-24 09:08:56.444453609 +0000 UTC m=+912.775726861" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.456321 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" event={"ID":"3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b","Type":"ContainerStarted","Data":"0b1835031706e207a4faeed8aa2ce7343f93dff267dcec97e05371d7f04f4796"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.457016 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.467456 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" event={"ID":"d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc","Type":"ContainerStarted","Data":"ef44ea0703d7f9e40628ba86629ba52553a2d422a81dfeb3d3e05133942ba0a4"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.468378 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.473324 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" event={"ID":"1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce","Type":"ContainerStarted","Data":"a5c808ac6b80877d7ca5a78867c68d2df1b26454292bba33fba99cd189b3187d"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.474565 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.477207 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" event={"ID":"7cfebe98-a194-4c28-861f-a80f9f9f22de","Type":"ContainerStarted","Data":"c9adc340800b44dae252b8cff7466fc5dab926afed56baca34081fd19ed896f5"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.478200 4719 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.478481 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" event={"ID":"7cfebe98-a194-4c28-861f-a80f9f9f22de","Type":"ContainerStarted","Data":"985777ece06d7b06e1ceeb535970977d516bb50d53e3f967d3417fdf4dc74aef"} Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.542267 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" podStartSLOduration=9.808237534 podStartE2EDuration="44.542246325s" podCreationTimestamp="2025-11-24 09:08:12 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.264622802 +0000 UTC m=+871.595896054" lastFinishedPulling="2025-11-24 09:08:49.998631553 +0000 UTC m=+906.329904845" observedRunningTime="2025-11-24 09:08:56.491107457 +0000 UTC m=+912.822380709" watchObservedRunningTime="2025-11-24 09:08:56.542246325 +0000 UTC m=+912.873519577" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.571887 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" podStartSLOduration=8.185092222 podStartE2EDuration="44.571872171s" podCreationTimestamp="2025-11-24 09:08:12 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.327778624 +0000 UTC m=+871.659051866" lastFinishedPulling="2025-11-24 09:08:51.714558563 +0000 UTC m=+908.045831815" observedRunningTime="2025-11-24 09:08:56.570939514 +0000 UTC m=+912.902212786" watchObservedRunningTime="2025-11-24 09:08:56.571872171 +0000 UTC m=+912.903145423" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.630985 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" podStartSLOduration=5.864996673 podStartE2EDuration="44.630959618s" podCreationTimestamp="2025-11-24 09:08:12 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.56143599 +0000 UTC m=+871.892709242" lastFinishedPulling="2025-11-24 09:08:54.327398935 +0000 UTC m=+910.658672187" observedRunningTime="2025-11-24 09:08:56.626440208 +0000 UTC m=+912.957713460" watchObservedRunningTime="2025-11-24 09:08:56.630959618 +0000 UTC m=+912.962232880" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.660984 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" podStartSLOduration=10.713718575 podStartE2EDuration="45.660970956s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.050535918 +0000 UTC m=+871.381809170" lastFinishedPulling="2025-11-24 09:08:49.997788299 +0000 UTC m=+906.329061551" observedRunningTime="2025-11-24 09:08:56.658883465 +0000 UTC m=+912.990156727" watchObservedRunningTime="2025-11-24 09:08:56.660970956 +0000 UTC m=+912.992244208" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.696603 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" podStartSLOduration=10.74813384 podStartE2EDuration="45.696580985s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.05025649 +0000 UTC m=+871.381529742" lastFinishedPulling="2025-11-24 09:08:49.998703635 
+0000 UTC m=+906.329976887" observedRunningTime="2025-11-24 09:08:56.696293076 +0000 UTC m=+913.027566328" watchObservedRunningTime="2025-11-24 09:08:56.696580985 +0000 UTC m=+913.027854237" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.748370 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" podStartSLOduration=10.93711628 podStartE2EDuration="45.748349871s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.185252761 +0000 UTC m=+871.516526013" lastFinishedPulling="2025-11-24 09:08:49.996486322 +0000 UTC m=+906.327759604" observedRunningTime="2025-11-24 09:08:56.72098321 +0000 UTC m=+913.052256492" watchObservedRunningTime="2025-11-24 09:08:56.748349871 +0000 UTC m=+913.079623133" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.748650 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" podStartSLOduration=10.976125331 podStartE2EDuration="45.748645499s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.224862459 +0000 UTC m=+871.556135711" lastFinishedPulling="2025-11-24 09:08:49.997382627 +0000 UTC m=+906.328655879" observedRunningTime="2025-11-24 09:08:56.747925918 +0000 UTC m=+913.079199170" watchObservedRunningTime="2025-11-24 09:08:56.748645499 +0000 UTC m=+913.079918751" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.767977 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" podStartSLOduration=9.651951942 podStartE2EDuration="45.767962797s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:13.882093583 +0000 UTC m=+870.213366835" lastFinishedPulling="2025-11-24 09:08:49.998104438 +0000 UTC m=+906.329377690" observedRunningTime="2025-11-24 09:08:56.767481753 +0000 UTC m=+913.098755015" watchObservedRunningTime="2025-11-24 09:08:56.767962797 +0000 UTC m=+913.099236049" Nov 24 09:08:56 crc kubenswrapper[4719]: I1124 09:08:56.784220 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" podStartSLOduration=10.255587364 podStartE2EDuration="44.784203237s" podCreationTimestamp="2025-11-24 09:08:12 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.46789685 +0000 UTC m=+871.799170102" lastFinishedPulling="2025-11-24 09:08:49.996512723 +0000 UTC m=+906.327785975" observedRunningTime="2025-11-24 09:08:56.782092706 +0000 UTC m=+913.113365968" watchObservedRunningTime="2025-11-24 09:08:56.784203237 +0000 UTC m=+913.115476489" Nov 24 09:08:57 crc kubenswrapper[4719]: I1124 09:08:57.483840 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" event={"ID":"070e32a3-4fa9-4ab4-9e55-d76c0c87db3c","Type":"ContainerStarted","Data":"0e4470b332ab9b445dba3bb81c0e7640c8fd3ce6a2022c0630c0cf24f666f6ce"} Nov 24 09:08:57 crc kubenswrapper[4719]: I1124 09:08:57.485364 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" event={"ID":"08979ac6-d1d0-4ef7-8996-5b02e8e8dae6","Type":"ContainerStarted","Data":"3a510dd04af98998cdcacda56fd0a0bb855708b6455b735a4630be3769394d4a"} Nov 24 09:08:57 crc kubenswrapper[4719]: I1124 
09:08:57.485463 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:08:57 crc kubenswrapper[4719]: I1124 09:08:57.487365 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" event={"ID":"17ddd27a-66d1-4d80-abc7-80fde501fa8d","Type":"ContainerStarted","Data":"dac5dc81908f5ae0c99fc70a561ffa30b4869d8ee93af74862fba064e245eaed"} Nov 24 09:08:57 crc kubenswrapper[4719]: I1124 09:08:57.504990 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" podStartSLOduration=6.976551106 podStartE2EDuration="46.504969494s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:16.567334275 +0000 UTC m=+872.898607527" lastFinishedPulling="2025-11-24 09:08:56.095752673 +0000 UTC m=+912.427025915" observedRunningTime="2025-11-24 09:08:57.501458893 +0000 UTC m=+913.832732165" watchObservedRunningTime="2025-11-24 09:08:57.504969494 +0000 UTC m=+913.836242756" Nov 24 09:08:59 crc kubenswrapper[4719]: I1124 09:08:59.498869 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" Nov 24 09:08:59 crc kubenswrapper[4719]: I1124 09:08:59.518108 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" podStartSLOduration=7.607879709 podStartE2EDuration="48.518088695s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.189537632 +0000 UTC m=+871.520810884" lastFinishedPulling="2025-11-24 09:08:56.099746618 +0000 UTC m=+912.431019870" observedRunningTime="2025-11-24 09:08:59.516488119 +0000 UTC m=+915.847761381" watchObservedRunningTime="2025-11-24 09:08:59.518088695 +0000 UTC m=+915.849361967" Nov 24 09:09:00 crc kubenswrapper[4719]: I1124 09:09:00.506285 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" Nov 24 09:09:00 crc kubenswrapper[4719]: I1124 09:09:00.527913 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" podStartSLOduration=8.50535025 podStartE2EDuration="49.527885894s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.222095351 +0000 UTC m=+871.553368603" lastFinishedPulling="2025-11-24 09:08:56.244630995 +0000 UTC m=+912.575904247" observedRunningTime="2025-11-24 09:09:00.526602757 +0000 UTC m=+916.857876029" watchObservedRunningTime="2025-11-24 09:09:00.527885894 +0000 UTC m=+916.859159156" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.514068 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" event={"ID":"643149e5-3960-4912-a497-c0cb9c0e722f","Type":"ContainerStarted","Data":"22c0bf465c11a8601d0f5b0835c8bb3968591fe352a5f73287890290bfdb7f68"} Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.515635 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.521759 4719 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" event={"ID":"a951b65e-e9bd-43bc-9fa0-673642653e4c","Type":"ContainerStarted","Data":"e5590bf8c0d4dd1b0837adb2c40a3979c1a5c96c9573b56778dca12ab4f42216"} Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.521950 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.523645 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" event={"ID":"5a2058d2-1589-484e-a5a1-de7e31af1a63","Type":"ContainerStarted","Data":"87cbd45ff9c22ca1b8c847014f845b45d74217aaad8216d575154906b070f57a"} Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.524305 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.526464 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" event={"ID":"30241c11-005e-4410-ad1a-71d6c5c0910f","Type":"ContainerStarted","Data":"91a713935c726e8fd7db7f5091b678f468da719d887be2b9da0f7b01893ac1dc"} Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.526921 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.530570 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" event={"ID":"a0a59a11-1bf3-4ff8-8496-9414bc0ae549","Type":"ContainerStarted","Data":"e56c11051c40126c9ca92355e9d55882a5761530a5a68acf38431b65906300ee"} Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.530596 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.533221 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-lsd4k" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.570861 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" podStartSLOduration=7.293658674 podStartE2EDuration="50.570841782s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:17.565413771 +0000 UTC m=+873.896687023" lastFinishedPulling="2025-11-24 09:09:00.842596869 +0000 UTC m=+917.173870131" observedRunningTime="2025-11-24 09:09:01.565734475 +0000 UTC m=+917.897007737" watchObservedRunningTime="2025-11-24 09:09:01.570841782 +0000 UTC m=+917.902115034" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.588398 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" podStartSLOduration=4.680819768 podStartE2EDuration="50.588379689s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:14.800443278 +0000 UTC m=+871.131716530" lastFinishedPulling="2025-11-24 09:09:00.708003189 +0000 UTC m=+917.039276451" 
observedRunningTime="2025-11-24 09:09:01.587989128 +0000 UTC m=+917.919262380" watchObservedRunningTime="2025-11-24 09:09:01.588379689 +0000 UTC m=+917.919652941" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.601121 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" podStartSLOduration=4.6589439200000005 podStartE2EDuration="50.601103357s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.049515329 +0000 UTC m=+871.380788581" lastFinishedPulling="2025-11-24 09:09:00.991674766 +0000 UTC m=+917.322948018" observedRunningTime="2025-11-24 09:09:01.600302273 +0000 UTC m=+917.931575545" watchObservedRunningTime="2025-11-24 09:09:01.601103357 +0000 UTC m=+917.932376609" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.623268 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" podStartSLOduration=4.155469371 podStartE2EDuration="49.623248767s" podCreationTimestamp="2025-11-24 09:08:12 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.239802671 +0000 UTC m=+871.571075923" lastFinishedPulling="2025-11-24 09:09:00.707582067 +0000 UTC m=+917.038855319" observedRunningTime="2025-11-24 09:09:01.621467815 +0000 UTC m=+917.952741087" watchObservedRunningTime="2025-11-24 09:09:01.623248767 +0000 UTC m=+917.954522029" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.672668 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" podStartSLOduration=5.23043364 podStartE2EDuration="50.672648464s" podCreationTimestamp="2025-11-24 09:08:11 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.264458747 +0000 UTC m=+871.595731999" lastFinishedPulling="2025-11-24 09:09:00.706673571 +0000 UTC m=+917.037946823" observedRunningTime="2025-11-24 09:09:01.668248137 +0000 UTC m=+917.999521409" watchObservedRunningTime="2025-11-24 09:09:01.672648464 +0000 UTC m=+918.003921716" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.698375 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-6hhz5" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.791796 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-sf5qt" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.816470 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-tjjkt" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.857020 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-j22wh" Nov 24 09:09:01 crc kubenswrapper[4719]: I1124 09:09:01.888370 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7969689c84-c9h59" Nov 24 09:09:02 crc kubenswrapper[4719]: I1124 09:09:02.023402 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-4sxvh" Nov 24 09:09:02 crc kubenswrapper[4719]: I1124 09:09:02.249744 4719 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58f887965d-lz2r8" Nov 24 09:09:02 crc kubenswrapper[4719]: I1124 09:09:02.446428 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-rnvl8" Nov 24 09:09:02 crc kubenswrapper[4719]: I1124 09:09:02.517827 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-plrvj" Nov 24 09:09:02 crc kubenswrapper[4719]: I1124 09:09:02.746811 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-gqnbl" Nov 24 09:09:03 crc kubenswrapper[4719]: I1124 09:09:03.075140 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" Nov 24 09:09:03 crc kubenswrapper[4719]: I1124 09:09:03.106628 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-br6f4" Nov 24 09:09:03 crc kubenswrapper[4719]: I1124 09:09:03.133217 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d656998f4-tlsj6" Nov 24 09:09:03 crc kubenswrapper[4719]: E1124 09:09:03.523873 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" podUID="714fe5a8-a778-4366-8823-868dd1210515" Nov 24 09:09:04 crc kubenswrapper[4719]: E1124 09:09:04.525968 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" podUID="33185bd6-40f2-4fb4-83b0-dd469f48598f" Nov 24 09:09:05 crc kubenswrapper[4719]: I1124 09:09:05.610266 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-fhb77" Nov 24 09:09:06 crc kubenswrapper[4719]: I1124 09:09:06.246778 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-lf45p" Nov 24 09:09:11 crc kubenswrapper[4719]: I1124 09:09:11.821489 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-xkfjt" Nov 24 09:09:12 crc kubenswrapper[4719]: I1124 09:09:12.302474 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-r2r85" Nov 24 09:09:12 crc kubenswrapper[4719]: I1124 09:09:12.482778 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-lthw6" Nov 24 09:09:12 crc kubenswrapper[4719]: I1124 09:09:12.797524 4719 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-d4vvj" Nov 24 09:09:15 crc kubenswrapper[4719]: I1124 09:09:15.525831 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:09:16 crc kubenswrapper[4719]: I1124 09:09:16.629783 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" event={"ID":"714fe5a8-a778-4366-8823-868dd1210515","Type":"ContainerStarted","Data":"d6be7bfea127ee9f78271fd590004556cb30249b654edbcd5c510d3d0f7ddc66"} Nov 24 09:09:16 crc kubenswrapper[4719]: I1124 09:09:16.630292 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" Nov 24 09:09:19 crc kubenswrapper[4719]: I1124 09:09:19.540114 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" podStartSLOduration=6.414204685 podStartE2EDuration="1m7.540094389s" podCreationTimestamp="2025-11-24 09:08:12 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.351390111 +0000 UTC m=+871.682663363" lastFinishedPulling="2025-11-24 09:09:16.477279815 +0000 UTC m=+932.808553067" observedRunningTime="2025-11-24 09:09:16.650252833 +0000 UTC m=+932.981526095" watchObservedRunningTime="2025-11-24 09:09:19.540094389 +0000 UTC m=+935.871367661" Nov 24 09:09:20 crc kubenswrapper[4719]: I1124 09:09:20.660459 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" event={"ID":"33185bd6-40f2-4fb4-83b0-dd469f48598f","Type":"ContainerStarted","Data":"c8455716a028420d399e38d97c8b4638a7fc9c4e83cdbf98e03bf8e9d7c1db9b"} Nov 24 09:09:22 crc kubenswrapper[4719]: I1124 09:09:22.990969 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" Nov 24 09:09:23 crc kubenswrapper[4719]: I1124 09:09:23.009682 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj" podStartSLOduration=5.5682535269999995 podStartE2EDuration="1m10.009665627s" podCreationTimestamp="2025-11-24 09:08:13 +0000 UTC" firstStartedPulling="2025-11-24 09:08:15.561223634 +0000 UTC m=+871.892496886" lastFinishedPulling="2025-11-24 09:09:20.002635734 +0000 UTC m=+936.333908986" observedRunningTime="2025-11-24 09:09:20.673785427 +0000 UTC m=+937.005058719" watchObservedRunningTime="2025-11-24 09:09:23.009665627 +0000 UTC m=+939.340938879" Nov 24 09:09:34 crc kubenswrapper[4719]: I1124 09:09:34.561920 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:09:34 crc kubenswrapper[4719]: I1124 09:09:34.562510 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.310230 4719 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q9wrl"] Nov 24 09:09:37 crc kubenswrapper[4719]: E1124 09:09:37.310839 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="extract-content" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.310856 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="extract-content" Nov 24 09:09:37 crc kubenswrapper[4719]: E1124 09:09:37.310885 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="extract-content" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.310893 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="extract-content" Nov 24 09:09:37 crc kubenswrapper[4719]: E1124 09:09:37.310911 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="registry-server" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.310919 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="registry-server" Nov 24 09:09:37 crc kubenswrapper[4719]: E1124 09:09:37.310939 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="extract-utilities" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.310946 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="extract-utilities" Nov 24 09:09:37 crc kubenswrapper[4719]: E1124 09:09:37.310980 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="extract-utilities" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.310987 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="extract-utilities" Nov 24 09:09:37 crc kubenswrapper[4719]: E1124 09:09:37.311007 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="registry-server" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.311016 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="registry-server" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.311207 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b66c736b-2b05-4c57-9518-a76a3d9f6e13" containerName="registry-server" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.311224 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="8029fd38-dc7e-4dd9-8ee9-29446b6e4a78" containerName="registry-server" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.319162 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.323480 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.323637 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.325337 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8hw5z" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.327383 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.334709 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q9wrl"] Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.429946 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6m5mh"] Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.431236 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.433634 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.437583 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6m5mh"] Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.454335 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ae47769-8e79-49ae-8edc-c34b734d3aeb-config\") pod \"dnsmasq-dns-675f4bcbfc-q9wrl\" (UID: \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.454574 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mffjb\" (UniqueName: \"kubernetes.io/projected/5ae47769-8e79-49ae-8edc-c34b734d3aeb-kube-api-access-mffjb\") pod \"dnsmasq-dns-675f4bcbfc-q9wrl\" (UID: \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.555938 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-config\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.556000 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mffjb\" (UniqueName: \"kubernetes.io/projected/5ae47769-8e79-49ae-8edc-c34b734d3aeb-kube-api-access-mffjb\") pod \"dnsmasq-dns-675f4bcbfc-q9wrl\" (UID: \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.556117 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krtsf\" (UniqueName: \"kubernetes.io/projected/a3e271c3-6700-4f77-8558-143271d60d77-kube-api-access-krtsf\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.556176 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.556226 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ae47769-8e79-49ae-8edc-c34b734d3aeb-config\") pod \"dnsmasq-dns-675f4bcbfc-q9wrl\" (UID: \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.557219 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ae47769-8e79-49ae-8edc-c34b734d3aeb-config\") pod \"dnsmasq-dns-675f4bcbfc-q9wrl\" (UID: \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.578828 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mffjb\" (UniqueName: \"kubernetes.io/projected/5ae47769-8e79-49ae-8edc-c34b734d3aeb-kube-api-access-mffjb\") pod \"dnsmasq-dns-675f4bcbfc-q9wrl\" (UID: \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.636389 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.657888 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-config\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.658012 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krtsf\" (UniqueName: \"kubernetes.io/projected/a3e271c3-6700-4f77-8558-143271d60d77-kube-api-access-krtsf\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.658072 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.660437 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-config\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.660645 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: 
\"a3e271c3-6700-4f77-8558-143271d60d77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.680860 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krtsf\" (UniqueName: \"kubernetes.io/projected/a3e271c3-6700-4f77-8558-143271d60d77-kube-api-access-krtsf\") pod \"dnsmasq-dns-78dd6ddcc-6m5mh\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:37 crc kubenswrapper[4719]: I1124 09:09:37.766175 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:09:38 crc kubenswrapper[4719]: I1124 09:09:38.099219 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q9wrl"] Nov 24 09:09:38 crc kubenswrapper[4719]: W1124 09:09:38.104064 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ae47769_8e79_49ae_8edc_c34b734d3aeb.slice/crio-6380c0d1756bc71d6dbc1f73b70d449cd9de841070dcc71368f5f0e72b3cfb8c WatchSource:0}: Error finding container 6380c0d1756bc71d6dbc1f73b70d449cd9de841070dcc71368f5f0e72b3cfb8c: Status 404 returned error can't find the container with id 6380c0d1756bc71d6dbc1f73b70d449cd9de841070dcc71368f5f0e72b3cfb8c Nov 24 09:09:38 crc kubenswrapper[4719]: I1124 09:09:38.196718 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6m5mh"] Nov 24 09:09:38 crc kubenswrapper[4719]: W1124 09:09:38.203323 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3e271c3_6700_4f77_8558_143271d60d77.slice/crio-d7ee223849f80fb946b24a17bff69f16d07b1d8b3a5f95ded3adedb54ebcfcad WatchSource:0}: Error finding container d7ee223849f80fb946b24a17bff69f16d07b1d8b3a5f95ded3adedb54ebcfcad: Status 404 returned error can't find the container with id d7ee223849f80fb946b24a17bff69f16d07b1d8b3a5f95ded3adedb54ebcfcad Nov 24 09:09:38 crc kubenswrapper[4719]: I1124 09:09:38.804606 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" event={"ID":"a3e271c3-6700-4f77-8558-143271d60d77","Type":"ContainerStarted","Data":"d7ee223849f80fb946b24a17bff69f16d07b1d8b3a5f95ded3adedb54ebcfcad"} Nov 24 09:09:38 crc kubenswrapper[4719]: I1124 09:09:38.805582 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" event={"ID":"5ae47769-8e79-49ae-8edc-c34b734d3aeb","Type":"ContainerStarted","Data":"6380c0d1756bc71d6dbc1f73b70d449cd9de841070dcc71368f5f0e72b3cfb8c"} Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.409586 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q9wrl"] Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.448325 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-srcvw"] Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.450230 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.480283 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-srcvw"] Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.606687 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-dns-svc\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.606767 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pt8t\" (UniqueName: \"kubernetes.io/projected/ec125b11-d40b-4268-835b-293b46fca475-kube-api-access-4pt8t\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.607148 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-config\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.708886 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-dns-svc\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.708967 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pt8t\" (UniqueName: \"kubernetes.io/projected/ec125b11-d40b-4268-835b-293b46fca475-kube-api-access-4pt8t\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.709069 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-config\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.710561 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-config\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.711678 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-dns-svc\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.754427 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pt8t\" (UniqueName: 
\"kubernetes.io/projected/ec125b11-d40b-4268-835b-293b46fca475-kube-api-access-4pt8t\") pod \"dnsmasq-dns-666b6646f7-srcvw\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.784199 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.938556 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6m5mh"] Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.986868 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8lw6x"] Nov 24 09:09:40 crc kubenswrapper[4719]: I1124 09:09:40.994119 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.018095 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8lw6x"] Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.117431 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4rg\" (UniqueName: \"kubernetes.io/projected/70f5a384-410e-4e03-a5bb-af88b26f8cb8-kube-api-access-7c4rg\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.117523 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-config\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.117576 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.218852 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-config\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.218908 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.218992 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4rg\" (UniqueName: \"kubernetes.io/projected/70f5a384-410e-4e03-a5bb-af88b26f8cb8-kube-api-access-7c4rg\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.223414 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.225090 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-config\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.243888 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4rg\" (UniqueName: \"kubernetes.io/projected/70f5a384-410e-4e03-a5bb-af88b26f8cb8-kube-api-access-7c4rg\") pod \"dnsmasq-dns-57d769cc4f-8lw6x\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.355888 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-srcvw"] Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.366021 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:09:41 crc kubenswrapper[4719]: W1124 09:09:41.374585 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec125b11_d40b_4268_835b_293b46fca475.slice/crio-c3322f4864f0c5fa3b9024be65b54a650561a91ad84d869b467b7b5941f97b7b WatchSource:0}: Error finding container c3322f4864f0c5fa3b9024be65b54a650561a91ad84d869b467b7b5941f97b7b: Status 404 returned error can't find the container with id c3322f4864f0c5fa3b9024be65b54a650561a91ad84d869b467b7b5941f97b7b Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.680874 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.682712 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.685293 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.685617 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.685853 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.690622 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.690742 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.690622 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.690635 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-t99s2" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.698463 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.829764 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.829816 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-config-data\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.829873 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.829906 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.829942 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.829964 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.829987 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv25k\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-kube-api-access-fv25k\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.830012 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.830051 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.830075 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.830106 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.852678 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" event={"ID":"ec125b11-d40b-4268-835b-293b46fca475","Type":"ContainerStarted","Data":"c3322f4864f0c5fa3b9024be65b54a650561a91ad84d869b467b7b5941f97b7b"} Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.862818 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8lw6x"] Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942199 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942244 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-config-data\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942328 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942366 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942408 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942422 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942441 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv25k\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-kube-api-access-fv25k\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942461 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942482 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942501 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.942527 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.944328 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-config-data\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.946068 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.954499 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.955075 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.955328 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.955777 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.965054 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.973784 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.974267 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.980450 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.994059 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:41 crc kubenswrapper[4719]: I1124 09:09:41.995725 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fv25k\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-kube-api-access-fv25k\") pod \"rabbitmq-server-0\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") " pod="openstack/rabbitmq-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.019269 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.157162 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.158324 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.167945 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.168069 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.167953 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9zhq9" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.168276 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.168408 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.168424 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.168553 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.186832 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.253996 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254073 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/957bbc3c-6b1d-403a-a49d-6bafef454a48-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254100 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254131 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/957bbc3c-6b1d-403a-a49d-6bafef454a48-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254163 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254203 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254243 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq86r\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-kube-api-access-qq86r\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254281 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254305 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254342 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.254362 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358445 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358499 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/957bbc3c-6b1d-403a-a49d-6bafef454a48-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358525 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358550 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/957bbc3c-6b1d-403a-a49d-6bafef454a48-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358579 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358615 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358649 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq86r\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-kube-api-access-qq86r\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358683 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358706 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358757 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.358776 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.360394 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.372306 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.373064 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.374793 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.375867 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.376987 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.382986 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.383174 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/957bbc3c-6b1d-403a-a49d-6bafef454a48-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.384626 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.388645 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/957bbc3c-6b1d-403a-a49d-6bafef454a48-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.397859 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq86r\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-kube-api-access-qq86r\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.415204 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.515634 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.773948 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.870003 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" event={"ID":"70f5a384-410e-4e03-a5bb-af88b26f8cb8","Type":"ContainerStarted","Data":"5369efc34bc0cbab5cfab5ef8f0336035ca4d90bf5b66faff2ad51a606d6d0ab"} Nov 24 09:09:42 crc kubenswrapper[4719]: I1124 09:09:42.872539 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b","Type":"ContainerStarted","Data":"7fb82ee214adef520f631e3249a024b92c3938fd053622d79fd98cabd7d70f77"} Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.200160 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:09:43 crc kubenswrapper[4719]: W1124 09:09:43.235600 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice/crio-cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc WatchSource:0}: Error finding container cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc: Status 404 returned error can't find the container with id cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.239523 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.240993 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.254583 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7k5md" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.254610 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.255275 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.255510 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.257159 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.266116 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.278970 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.279026 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.279167 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.286398 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j8sr\" (UniqueName: \"kubernetes.io/projected/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-kube-api-access-9j8sr\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.286543 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.286624 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-kolla-config\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.286733 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-config-data-default\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.286887 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.388554 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.388675 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.389905 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.389945 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.389979 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j8sr\" (UniqueName: \"kubernetes.io/projected/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-kube-api-access-9j8sr\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.389996 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.390078 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-kolla-config\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.390207 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.391581 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.391597 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.391844 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-config-data-default\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.391800 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-kolla-config\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.393525 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.417080 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.425580 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.440021 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j8sr\" (UniqueName: \"kubernetes.io/projected/0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38-kube-api-access-9j8sr\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.507817 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38\") " pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.570353 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 09:09:43 crc kubenswrapper[4719]: I1124 09:09:43.881496 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"957bbc3c-6b1d-403a-a49d-6bafef454a48","Type":"ContainerStarted","Data":"cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc"} Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.089817 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.574185 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.580653 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.589221 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-64fr7" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.589408 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.589530 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.589676 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.597864 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.620568 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/98cf534d-3e13-4443-901c-0755d91b2f09-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.620635 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.620684 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.620717 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.620738 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/98cf534d-3e13-4443-901c-0755d91b2f09-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.620907 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.621019 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cf534d-3e13-4443-901c-0755d91b2f09-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.621057 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rtlv\" (UniqueName: \"kubernetes.io/projected/98cf534d-3e13-4443-901c-0755d91b2f09-kube-api-access-7rtlv\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.730247 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cf534d-3e13-4443-901c-0755d91b2f09-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.730296 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rtlv\" (UniqueName: \"kubernetes.io/projected/98cf534d-3e13-4443-901c-0755d91b2f09-kube-api-access-7rtlv\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.730322 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/98cf534d-3e13-4443-901c-0755d91b2f09-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.730353 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.730389 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.730419 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.730439 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/98cf534d-3e13-4443-901c-0755d91b2f09-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.730475 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.731966 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.732956 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.733492 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/98cf534d-3e13-4443-901c-0755d91b2f09-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.733726 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.742736 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/98cf534d-3e13-4443-901c-0755d91b2f09-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.751453 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cf534d-3e13-4443-901c-0755d91b2f09-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.753961 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/98cf534d-3e13-4443-901c-0755d91b2f09-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.780612 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rtlv\" (UniqueName: \"kubernetes.io/projected/98cf534d-3e13-4443-901c-0755d91b2f09-kube-api-access-7rtlv\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.805754 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"98cf534d-3e13-4443-901c-0755d91b2f09\") " pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.926309 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38","Type":"ContainerStarted","Data":"dd338b8755255d07eea78176355563232b7a6618c90806d5a21e8e98f691b40f"} Nov 24 09:09:44 crc kubenswrapper[4719]: I1124 09:09:44.935254 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.128184 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.129439 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.142149 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.142277 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.142391 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ndc2q" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.149449 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.249301 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/769e49a4-92ab-4c92-aebd-3c79f66a6227-kolla-config\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.249379 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/769e49a4-92ab-4c92-aebd-3c79f66a6227-memcached-tls-certs\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.249402 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769e49a4-92ab-4c92-aebd-3c79f66a6227-combined-ca-bundle\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.254044 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7r62k\" (UniqueName: \"kubernetes.io/projected/769e49a4-92ab-4c92-aebd-3c79f66a6227-kube-api-access-7r62k\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.254085 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769e49a4-92ab-4c92-aebd-3c79f66a6227-config-data\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.355547 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/769e49a4-92ab-4c92-aebd-3c79f66a6227-memcached-tls-certs\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.355705 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769e49a4-92ab-4c92-aebd-3c79f66a6227-combined-ca-bundle\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.355751 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r62k\" (UniqueName: \"kubernetes.io/projected/769e49a4-92ab-4c92-aebd-3c79f66a6227-kube-api-access-7r62k\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.355942 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769e49a4-92ab-4c92-aebd-3c79f66a6227-config-data\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.356010 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/769e49a4-92ab-4c92-aebd-3c79f66a6227-kolla-config\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.357895 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769e49a4-92ab-4c92-aebd-3c79f66a6227-config-data\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.358411 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/769e49a4-92ab-4c92-aebd-3c79f66a6227-kolla-config\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.360222 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769e49a4-92ab-4c92-aebd-3c79f66a6227-combined-ca-bundle\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.385825 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/769e49a4-92ab-4c92-aebd-3c79f66a6227-memcached-tls-certs\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.393668 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r62k\" (UniqueName: \"kubernetes.io/projected/769e49a4-92ab-4c92-aebd-3c79f66a6227-kube-api-access-7r62k\") pod \"memcached-0\" (UID: \"769e49a4-92ab-4c92-aebd-3c79f66a6227\") " pod="openstack/memcached-0" Nov 24 09:09:45 crc kubenswrapper[4719]: I1124 09:09:45.475554 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 09:09:46 crc kubenswrapper[4719]: I1124 09:09:46.442349 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:09:46 crc kubenswrapper[4719]: I1124 09:09:46.443667 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 09:09:46 crc kubenswrapper[4719]: I1124 09:09:46.447494 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-m47kc" Nov 24 09:09:46 crc kubenswrapper[4719]: I1124 09:09:46.471687 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:09:46 crc kubenswrapper[4719]: I1124 09:09:46.594788 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2qj\" (UniqueName: \"kubernetes.io/projected/bbdd37f7-5b28-4ecb-96ad-b2c7986016e4-kube-api-access-dm2qj\") pod \"kube-state-metrics-0\" (UID: \"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4\") " pod="openstack/kube-state-metrics-0" Nov 24 09:09:46 crc kubenswrapper[4719]: I1124 09:09:46.696160 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2qj\" (UniqueName: \"kubernetes.io/projected/bbdd37f7-5b28-4ecb-96ad-b2c7986016e4-kube-api-access-dm2qj\") pod \"kube-state-metrics-0\" (UID: \"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4\") " pod="openstack/kube-state-metrics-0" Nov 24 09:09:46 crc kubenswrapper[4719]: I1124 09:09:46.727104 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2qj\" (UniqueName: \"kubernetes.io/projected/bbdd37f7-5b28-4ecb-96ad-b2c7986016e4-kube-api-access-dm2qj\") pod \"kube-state-metrics-0\" (UID: \"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4\") " pod="openstack/kube-state-metrics-0" Nov 24 09:09:46 crc kubenswrapper[4719]: I1124 09:09:46.767675 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.483512 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ccf6d"] Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.487124 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.490651 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-vjthh" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.491465 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.498330 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.514009 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ccf6d"] Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.541244 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bk9qz"] Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.544400 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bk9qz"] Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.544501 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.667905 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/225b57e5-7f49-4b51-87db-6c790f23bf6e-ovn-controller-tls-certs\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.667953 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-log\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.667995 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvz6h\" (UniqueName: \"kubernetes.io/projected/225b57e5-7f49-4b51-87db-6c790f23bf6e-kube-api-access-tvz6h\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.668075 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwdm\" (UniqueName: \"kubernetes.io/projected/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-kube-api-access-spwdm\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.668116 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-etc-ovs\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.668152 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-run-ovn\") pod \"ovn-controller-ccf6d\" (UID: 
\"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.668728 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-run\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.669429 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-run\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.669471 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-scripts\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.669530 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/225b57e5-7f49-4b51-87db-6c790f23bf6e-scripts\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.669560 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225b57e5-7f49-4b51-87db-6c790f23bf6e-combined-ca-bundle\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.669579 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-lib\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.669614 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-log-ovn\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.770719 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-log\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.770763 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/225b57e5-7f49-4b51-87db-6c790f23bf6e-ovn-controller-tls-certs\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc 
kubenswrapper[4719]: I1124 09:09:50.770799 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvz6h\" (UniqueName: \"kubernetes.io/projected/225b57e5-7f49-4b51-87db-6c790f23bf6e-kube-api-access-tvz6h\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.770847 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spwdm\" (UniqueName: \"kubernetes.io/projected/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-kube-api-access-spwdm\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.770879 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-etc-ovs\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.770911 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-run-ovn\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.770938 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-run\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.770972 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-run\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.770993 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-scripts\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771026 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/225b57e5-7f49-4b51-87db-6c790f23bf6e-scripts\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771068 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225b57e5-7f49-4b51-87db-6c790f23bf6e-combined-ca-bundle\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771106 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-lib\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771137 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-log-ovn\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771682 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-log-ovn\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771769 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-run-ovn\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771829 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/225b57e5-7f49-4b51-87db-6c790f23bf6e-var-run\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771871 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-log\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.771881 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-run\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.774048 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-scripts\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.774214 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-var-lib\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.774344 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-etc-ovs\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.776896 4719 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/225b57e5-7f49-4b51-87db-6c790f23bf6e-scripts\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.779006 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225b57e5-7f49-4b51-87db-6c790f23bf6e-combined-ca-bundle\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.790585 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/225b57e5-7f49-4b51-87db-6c790f23bf6e-ovn-controller-tls-certs\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.794528 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvz6h\" (UniqueName: \"kubernetes.io/projected/225b57e5-7f49-4b51-87db-6c790f23bf6e-kube-api-access-tvz6h\") pod \"ovn-controller-ccf6d\" (UID: \"225b57e5-7f49-4b51-87db-6c790f23bf6e\") " pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.800361 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spwdm\" (UniqueName: \"kubernetes.io/projected/d36ea9cd-a7ed-463f-9ef5-58066e1446ed-kube-api-access-spwdm\") pod \"ovn-controller-ovs-bk9qz\" (UID: \"d36ea9cd-a7ed-463f-9ef5-58066e1446ed\") " pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.833579 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ccf6d" Nov 24 09:09:50 crc kubenswrapper[4719]: I1124 09:09:50.872588 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.365471 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.367327 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.371783 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.373207 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vt2lb" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.373365 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.374075 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.375085 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.376530 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.483895 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.483950 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30c29a06-49fe-444c-befa-e10d67ac0e5e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.483986 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.484011 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.484059 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhngs\" (UniqueName: \"kubernetes.io/projected/30c29a06-49fe-444c-befa-e10d67ac0e5e-kube-api-access-hhngs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.484090 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c29a06-49fe-444c-befa-e10d67ac0e5e-config\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.484119 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30c29a06-49fe-444c-befa-e10d67ac0e5e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.484135 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.586689 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30c29a06-49fe-444c-befa-e10d67ac0e5e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.586791 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.586855 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.586895 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30c29a06-49fe-444c-befa-e10d67ac0e5e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.586951 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.586983 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.587023 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhngs\" (UniqueName: \"kubernetes.io/projected/30c29a06-49fe-444c-befa-e10d67ac0e5e-kube-api-access-hhngs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.587076 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c29a06-49fe-444c-befa-e10d67ac0e5e-config\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 
09:09:51.587406 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30c29a06-49fe-444c-befa-e10d67ac0e5e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.587972 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c29a06-49fe-444c-befa-e10d67ac0e5e-config\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.588392 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.588764 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30c29a06-49fe-444c-befa-e10d67ac0e5e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.602060 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.604207 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.614641 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.614793 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30c29a06-49fe-444c-befa-e10d67ac0e5e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.618891 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhngs\" (UniqueName: \"kubernetes.io/projected/30c29a06-49fe-444c-befa-e10d67ac0e5e-kube-api-access-hhngs\") pod \"ovsdbserver-nb-0\" (UID: \"30c29a06-49fe-444c-befa-e10d67ac0e5e\") " pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:51 crc kubenswrapper[4719]: I1124 09:09:51.685129 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.392844 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.394985 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.398633 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.399458 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6h64x" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.399498 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.399637 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.407792 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.544450 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hj7l\" (UniqueName: \"kubernetes.io/projected/0be9bc93-deb3-4864-a259-dc32d2d64870-kube-api-access-2hj7l\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.544518 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0be9bc93-deb3-4864-a259-dc32d2d64870-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.544668 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be9bc93-deb3-4864-a259-dc32d2d64870-config\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.544828 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.544872 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.544904 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc 
kubenswrapper[4719]: I1124 09:09:54.545080 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0be9bc93-deb3-4864-a259-dc32d2d64870-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.545176 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.646744 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0be9bc93-deb3-4864-a259-dc32d2d64870-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.646805 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.646842 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hj7l\" (UniqueName: \"kubernetes.io/projected/0be9bc93-deb3-4864-a259-dc32d2d64870-kube-api-access-2hj7l\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.646890 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0be9bc93-deb3-4864-a259-dc32d2d64870-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.646948 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be9bc93-deb3-4864-a259-dc32d2d64870-config\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.646998 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.647865 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.647902 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.649066 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.650862 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0be9bc93-deb3-4864-a259-dc32d2d64870-config\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.651724 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0be9bc93-deb3-4864-a259-dc32d2d64870-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.651786 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0be9bc93-deb3-4864-a259-dc32d2d64870-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.655601 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.658261 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.664881 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hj7l\" (UniqueName: \"kubernetes.io/projected/0be9bc93-deb3-4864-a259-dc32d2d64870-kube-api-access-2hj7l\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.668095 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be9bc93-deb3-4864-a259-dc32d2d64870-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.669764 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0be9bc93-deb3-4864-a259-dc32d2d64870\") " pod="openstack/ovsdbserver-sb-0" Nov 24 09:09:54 crc kubenswrapper[4719]: I1124 09:09:54.738492 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 09:10:00 crc kubenswrapper[4719]: I1124 09:10:00.537090 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 09:10:03 crc kubenswrapper[4719]: W1124 09:10:03.862896 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98cf534d_3e13_4443_901c_0755d91b2f09.slice/crio-07a607bafd940a938b3b517d982abb283a636b65777ba1c02aafea7e28710b8e WatchSource:0}: Error finding container 07a607bafd940a938b3b517d982abb283a636b65777ba1c02aafea7e28710b8e: Status 404 returned error can't find the container with id 07a607bafd940a938b3b517d982abb283a636b65777ba1c02aafea7e28710b8e Nov 24 09:10:04 crc kubenswrapper[4719]: I1124 09:10:04.093946 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"98cf534d-3e13-4443-901c-0755d91b2f09","Type":"ContainerStarted","Data":"07a607bafd940a938b3b517d982abb283a636b65777ba1c02aafea7e28710b8e"} Nov 24 09:10:04 crc kubenswrapper[4719]: I1124 09:10:04.561500 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:10:04 crc kubenswrapper[4719]: I1124 09:10:04.561879 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:10:05 crc kubenswrapper[4719]: E1124 09:10:05.916245 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 09:10:05 crc kubenswrapper[4719]: E1124 09:10:05.916414 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7c4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-8lw6x_openstack(70f5a384-410e-4e03-a5bb-af88b26f8cb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:10:05 crc kubenswrapper[4719]: E1124 09:10:05.917594 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.036355 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.036549 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pt8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-srcvw_openstack(ec125b11-d40b-4268-835b-293b46fca475): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.037712 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" podUID="ec125b11-d40b-4268-835b-293b46fca475" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.075198 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.075341 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krtsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6m5mh_openstack(a3e271c3-6700-4f77-8558-143271d60d77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.076500 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" podUID="a3e271c3-6700-4f77-8558-143271d60d77" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.136696 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" podUID="ec125b11-d40b-4268-835b-293b46fca475" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.136885 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.179915 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.180124 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mffjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-q9wrl_openstack(5ae47769-8e79-49ae-8edc-c34b734d3aeb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:10:06 crc kubenswrapper[4719]: E1124 09:10:06.181365 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" podUID="5ae47769-8e79-49ae-8edc-c34b734d3aeb" Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.778945 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.896735 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krtsf\" (UniqueName: \"kubernetes.io/projected/a3e271c3-6700-4f77-8558-143271d60d77-kube-api-access-krtsf\") pod \"a3e271c3-6700-4f77-8558-143271d60d77\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.896966 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-config\") pod \"a3e271c3-6700-4f77-8558-143271d60d77\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.896992 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-dns-svc\") pod \"a3e271c3-6700-4f77-8558-143271d60d77\" (UID: \"a3e271c3-6700-4f77-8558-143271d60d77\") " Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.897339 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-config" (OuterVolumeSpecName: "config") pod "a3e271c3-6700-4f77-8558-143271d60d77" (UID: "a3e271c3-6700-4f77-8558-143271d60d77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.897556 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3e271c3-6700-4f77-8558-143271d60d77" (UID: "a3e271c3-6700-4f77-8558-143271d60d77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.897938 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.897959 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3e271c3-6700-4f77-8558-143271d60d77-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.910337 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3e271c3-6700-4f77-8558-143271d60d77-kube-api-access-krtsf" (OuterVolumeSpecName: "kube-api-access-krtsf") pod "a3e271c3-6700-4f77-8558-143271d60d77" (UID: "a3e271c3-6700-4f77-8558-143271d60d77"). InnerVolumeSpecName "kube-api-access-krtsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:06 crc kubenswrapper[4719]: I1124 09:10:06.941795 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ccf6d"] Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:06.999747 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krtsf\" (UniqueName: \"kubernetes.io/projected/a3e271c3-6700-4f77-8558-143271d60d77-kube-api-access-krtsf\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.111945 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.117982 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.121675 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ccf6d" event={"ID":"225b57e5-7f49-4b51-87db-6c790f23bf6e","Type":"ContainerStarted","Data":"bf0c9c1e101758cb6b7ef0d3cd7cd13a9de4348ee918028961ab5ef4c507f4f8"} Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.124506 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" event={"ID":"a3e271c3-6700-4f77-8558-143271d60d77","Type":"ContainerDied","Data":"d7ee223849f80fb946b24a17bff69f16d07b1d8b3a5f95ded3adedb54ebcfcad"} Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.124576 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6m5mh" Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.127374 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"98cf534d-3e13-4443-901c-0755d91b2f09","Type":"ContainerStarted","Data":"077a9c74c27871dbc16a3e3b9e99ff1a570b36ee7a24a0e211dd877151dfbe20"} Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.130686 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38","Type":"ContainerStarted","Data":"6ef0a45e58f11bcde236a5990ad5e4bd31e24902e28e44710634c0581c142739"} Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.214149 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6m5mh"] Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.235639 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6m5mh"] Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.576576 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 09:10:07 crc kubenswrapper[4719]: I1124 09:10:07.941793 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.024174 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mffjb\" (UniqueName: \"kubernetes.io/projected/5ae47769-8e79-49ae-8edc-c34b734d3aeb-kube-api-access-mffjb\") pod \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\" (UID: \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\") " Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.024335 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ae47769-8e79-49ae-8edc-c34b734d3aeb-config\") pod \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\" (UID: \"5ae47769-8e79-49ae-8edc-c34b734d3aeb\") " Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.024988 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae47769-8e79-49ae-8edc-c34b734d3aeb-config" (OuterVolumeSpecName: "config") pod "5ae47769-8e79-49ae-8edc-c34b734d3aeb" (UID: "5ae47769-8e79-49ae-8edc-c34b734d3aeb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.025422 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ae47769-8e79-49ae-8edc-c34b734d3aeb-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.032248 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae47769-8e79-49ae-8edc-c34b734d3aeb-kube-api-access-mffjb" (OuterVolumeSpecName: "kube-api-access-mffjb") pod "5ae47769-8e79-49ae-8edc-c34b734d3aeb" (UID: "5ae47769-8e79-49ae-8edc-c34b734d3aeb"). InnerVolumeSpecName "kube-api-access-mffjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.127490 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mffjb\" (UniqueName: \"kubernetes.io/projected/5ae47769-8e79-49ae-8edc-c34b734d3aeb-kube-api-access-mffjb\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.141880 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" event={"ID":"5ae47769-8e79-49ae-8edc-c34b734d3aeb","Type":"ContainerDied","Data":"6380c0d1756bc71d6dbc1f73b70d449cd9de841070dcc71368f5f0e72b3cfb8c"} Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.141985 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q9wrl" Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.158377 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4","Type":"ContainerStarted","Data":"7f44dbde35840cd77b4c881412f26908255e3218583fed8e1c6d2dd0c89853e2"} Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.160876 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0be9bc93-deb3-4864-a259-dc32d2d64870","Type":"ContainerStarted","Data":"c3ca55dd590e241bd96eb82b0b17511a7121ff336e98ed1f974f2557dec5a80e"} Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.166439 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"769e49a4-92ab-4c92-aebd-3c79f66a6227","Type":"ContainerStarted","Data":"2b3c4aded64a51c6c64cfdfcbe0611939b67765bb261cbcb28b6bff3ec8e0988"} Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.190331 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bk9qz"] Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.208341 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b","Type":"ContainerStarted","Data":"c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05"} Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.211936 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"957bbc3c-6b1d-403a-a49d-6bafef454a48","Type":"ContainerStarted","Data":"4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d"} Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.272535 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q9wrl"] Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.328405 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q9wrl"] Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.449507 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.533197 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae47769-8e79-49ae-8edc-c34b734d3aeb" path="/var/lib/kubelet/pods/5ae47769-8e79-49ae-8edc-c34b734d3aeb/volumes" Nov 24 09:10:08 crc kubenswrapper[4719]: I1124 09:10:08.533596 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3e271c3-6700-4f77-8558-143271d60d77" path="/var/lib/kubelet/pods/a3e271c3-6700-4f77-8558-143271d60d77/volumes" Nov 24 09:10:09 crc kubenswrapper[4719]: I1124 09:10:09.220312 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"30c29a06-49fe-444c-befa-e10d67ac0e5e","Type":"ContainerStarted","Data":"2c181b7a828b63a69ce2cbbdc7bd23a9a76966dab956873eb03e324b8e447375"} Nov 24 09:10:09 crc kubenswrapper[4719]: I1124 09:10:09.221545 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bk9qz" event={"ID":"d36ea9cd-a7ed-463f-9ef5-58066e1446ed","Type":"ContainerStarted","Data":"d646989338c9bca24c2b4efb53f1d4d9b223b04da8e1010e02276e4c591224cd"} Nov 24 09:10:11 crc kubenswrapper[4719]: I1124 09:10:11.243016 4719 generic.go:334] "Generic (PLEG): container finished" podID="0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38" 
containerID="6ef0a45e58f11bcde236a5990ad5e4bd31e24902e28e44710634c0581c142739" exitCode=0 Nov 24 09:10:11 crc kubenswrapper[4719]: I1124 09:10:11.243100 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38","Type":"ContainerDied","Data":"6ef0a45e58f11bcde236a5990ad5e4bd31e24902e28e44710634c0581c142739"} Nov 24 09:10:11 crc kubenswrapper[4719]: I1124 09:10:11.249176 4719 generic.go:334] "Generic (PLEG): container finished" podID="98cf534d-3e13-4443-901c-0755d91b2f09" containerID="077a9c74c27871dbc16a3e3b9e99ff1a570b36ee7a24a0e211dd877151dfbe20" exitCode=0 Nov 24 09:10:11 crc kubenswrapper[4719]: I1124 09:10:11.249220 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"98cf534d-3e13-4443-901c-0755d91b2f09","Type":"ContainerDied","Data":"077a9c74c27871dbc16a3e3b9e99ff1a570b36ee7a24a0e211dd877151dfbe20"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.276215 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ccf6d" event={"ID":"225b57e5-7f49-4b51-87db-6c790f23bf6e","Type":"ContainerStarted","Data":"730738ebf5eda5beeed8e96c59954c0e87f23b5ef8d4f544db6c1be5af58b91d"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.276812 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ccf6d" Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.277911 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4","Type":"ContainerStarted","Data":"8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.278092 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.281112 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0be9bc93-deb3-4864-a259-dc32d2d64870","Type":"ContainerStarted","Data":"06ef898fcf10af4c1cff0b832d3439c53f152c277f7fe82d3e559919b5c1590a"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.283689 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bk9qz" event={"ID":"d36ea9cd-a7ed-463f-9ef5-58066e1446ed","Type":"ContainerStarted","Data":"e49f3bd9edc5c6bceda6525e789b33bedb4d4281952c4e5e175066d5997abd6d"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.285691 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"769e49a4-92ab-4c92-aebd-3c79f66a6227","Type":"ContainerStarted","Data":"0854e3bfa66378d0215b25d0203656181b65aae48f21cdca30978272f0d9488a"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.285829 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.287674 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"98cf534d-3e13-4443-901c-0755d91b2f09","Type":"ContainerStarted","Data":"52e77e3f14c0f23f85110291365e7a7e7e99f17be424a7baa31e84200d236485"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.290514 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"30c29a06-49fe-444c-befa-e10d67ac0e5e","Type":"ContainerStarted","Data":"8dee9c1c87e21719def3381d36e79333fa867a4ea9bbfc0fcf5311e4f6277bb9"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.292456 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38","Type":"ContainerStarted","Data":"c25cad208d7e4110fb6da38a7eba6749c5c2797364ffb5b4b11e7f0bcebdb52e"} Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.302007 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ccf6d" podStartSLOduration=18.160693606 podStartE2EDuration="24.301982177s" podCreationTimestamp="2025-11-24 09:09:50 +0000 UTC" firstStartedPulling="2025-11-24 09:10:07.024350651 +0000 UTC m=+983.355623903" lastFinishedPulling="2025-11-24 09:10:13.165639222 +0000 UTC m=+989.496912474" observedRunningTime="2025-11-24 09:10:14.296871359 +0000 UTC m=+990.628144631" watchObservedRunningTime="2025-11-24 09:10:14.301982177 +0000 UTC m=+990.633255429" Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.329995 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.854370363 podStartE2EDuration="31.329973646s" podCreationTimestamp="2025-11-24 09:09:43 +0000 UTC" firstStartedPulling="2025-11-24 09:10:06.059084679 +0000 UTC m=+982.390357931" lastFinishedPulling="2025-11-24 09:10:06.534687962 +0000 UTC m=+982.865961214" observedRunningTime="2025-11-24 09:10:14.325053993 +0000 UTC m=+990.656327255" watchObservedRunningTime="2025-11-24 09:10:14.329973646 +0000 UTC m=+990.661246898" Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.345318 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=23.826787284 podStartE2EDuration="29.345301229s" podCreationTimestamp="2025-11-24 09:09:45 +0000 UTC" firstStartedPulling="2025-11-24 09:10:07.427179621 +0000 UTC m=+983.758452873" lastFinishedPulling="2025-11-24 09:10:12.945693556 +0000 UTC m=+989.276966818" observedRunningTime="2025-11-24 09:10:14.343143006 +0000 UTC m=+990.674416278" watchObservedRunningTime="2025-11-24 09:10:14.345301229 +0000 UTC m=+990.676574481" Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.400360 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.145049464 podStartE2EDuration="28.400339739s" podCreationTimestamp="2025-11-24 09:09:46 +0000 UTC" firstStartedPulling="2025-11-24 09:10:07.427510861 +0000 UTC m=+983.758784113" lastFinishedPulling="2025-11-24 09:10:13.682801136 +0000 UTC m=+990.014074388" observedRunningTime="2025-11-24 09:10:14.394541101 +0000 UTC m=+990.725814363" watchObservedRunningTime="2025-11-24 09:10:14.400339739 +0000 UTC m=+990.731613001" Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.545728 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.452970605 podStartE2EDuration="32.54571168s" podCreationTimestamp="2025-11-24 09:09:42 +0000 UTC" firstStartedPulling="2025-11-24 09:09:44.100019657 +0000 UTC m=+960.431292909" lastFinishedPulling="2025-11-24 09:10:06.192760732 +0000 UTC m=+982.524033984" observedRunningTime="2025-11-24 09:10:14.44259384 +0000 UTC m=+990.773867112" watchObservedRunningTime="2025-11-24 09:10:14.54571168 +0000 UTC m=+990.876984932" Nov 24 09:10:14 crc 
Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.943493 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 24 09:10:14 crc kubenswrapper[4719]: I1124 09:10:14.943534 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 24 09:10:15 crc kubenswrapper[4719]: I1124 09:10:15.301298 4719 generic.go:334] "Generic (PLEG): container finished" podID="d36ea9cd-a7ed-463f-9ef5-58066e1446ed" containerID="e49f3bd9edc5c6bceda6525e789b33bedb4d4281952c4e5e175066d5997abd6d" exitCode=0
Nov 24 09:10:15 crc kubenswrapper[4719]: I1124 09:10:15.301357 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bk9qz" event={"ID":"d36ea9cd-a7ed-463f-9ef5-58066e1446ed","Type":"ContainerDied","Data":"e49f3bd9edc5c6bceda6525e789b33bedb4d4281952c4e5e175066d5997abd6d"}
Nov 24 09:10:16 crc kubenswrapper[4719]: I1124 09:10:16.315102 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bk9qz" event={"ID":"d36ea9cd-a7ed-463f-9ef5-58066e1446ed","Type":"ContainerStarted","Data":"24a75ac541fcb41d59a45f830b1dcc8412856a35057ba9ec1d156a1e78e564f4"}
Nov 24 09:10:16 crc kubenswrapper[4719]: I1124 09:10:16.315387 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bk9qz" event={"ID":"d36ea9cd-a7ed-463f-9ef5-58066e1446ed","Type":"ContainerStarted","Data":"d120f78f8566c0b9f1ec973fca712de77b065b1fe1d727de14be6ad5aebc2e3d"}
Nov 24 09:10:16 crc kubenswrapper[4719]: I1124 09:10:16.315492 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bk9qz"
Nov 24 09:10:16 crc kubenswrapper[4719]: I1124 09:10:16.315507 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bk9qz"
Nov 24 09:10:16 crc kubenswrapper[4719]: I1124 09:10:16.334968 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bk9qz" podStartSLOduration=21.476109969 podStartE2EDuration="26.334947202s" podCreationTimestamp="2025-11-24 09:09:50 +0000 UTC" firstStartedPulling="2025-11-24 09:10:08.305420809 +0000 UTC m=+984.636694061" lastFinishedPulling="2025-11-24 09:10:13.164258042 +0000 UTC m=+989.495531294" observedRunningTime="2025-11-24 09:10:16.334305873 +0000 UTC m=+992.665579135" watchObservedRunningTime="2025-11-24 09:10:16.334947202 +0000 UTC m=+992.666220474"
Nov 24 09:10:19 crc kubenswrapper[4719]: I1124 09:10:19.342088 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0be9bc93-deb3-4864-a259-dc32d2d64870","Type":"ContainerStarted","Data":"66bc0412b2a34d50755e7f84af7549588bdada18c33e9577373b8daaa72b9781"}
Nov 24 09:10:19 crc kubenswrapper[4719]: I1124 09:10:19.344474 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"30c29a06-49fe-444c-befa-e10d67ac0e5e","Type":"ContainerStarted","Data":"8a5d82f06105a9789ed6a0ca35e18bc90a28127ef8ca9f18893cd2bc3c9e1846"}
Nov 24 09:10:19 crc kubenswrapper[4719]: I1124 09:10:19.346715 4719 generic.go:334] "Generic (PLEG): container finished" podID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" containerID="ac619c8d8744fedf1fc601c555b00f6044ad6e6f5c4d856aa298926528e736d4" exitCode=0
Nov 24 09:10:19 crc kubenswrapper[4719]: I1124 09:10:19.346749 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" event={"ID":"70f5a384-410e-4e03-a5bb-af88b26f8cb8","Type":"ContainerDied","Data":"ac619c8d8744fedf1fc601c555b00f6044ad6e6f5c4d856aa298926528e736d4"}
Nov 24 09:10:19 crc kubenswrapper[4719]: I1124 09:10:19.414816 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.818022062 podStartE2EDuration="26.414800158s" podCreationTimestamp="2025-11-24 09:09:53 +0000 UTC" firstStartedPulling="2025-11-24 09:10:07.615021929 +0000 UTC m=+983.946295181" lastFinishedPulling="2025-11-24 09:10:18.211800025 +0000 UTC m=+994.543073277" observedRunningTime="2025-11-24 09:10:19.388422955 +0000 UTC m=+995.719696217" watchObservedRunningTime="2025-11-24 09:10:19.414800158 +0000 UTC m=+995.746073410"
Nov 24 09:10:19 crc kubenswrapper[4719]: I1124 09:10:19.416886 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.694697985 podStartE2EDuration="29.416879408s" podCreationTimestamp="2025-11-24 09:09:50 +0000 UTC" firstStartedPulling="2025-11-24 09:10:08.468882173 +0000 UTC m=+984.800155425" lastFinishedPulling="2025-11-24 09:10:18.191063596 +0000 UTC m=+994.522336848" observedRunningTime="2025-11-24 09:10:19.408560417 +0000 UTC m=+995.739833689" watchObservedRunningTime="2025-11-24 09:10:19.416879408 +0000 UTC m=+995.748152660"
Nov 24 09:10:19 crc kubenswrapper[4719]: I1124 09:10:19.739420 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Nov 24 09:10:20 crc kubenswrapper[4719]: I1124 09:10:20.364521 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" event={"ID":"70f5a384-410e-4e03-a5bb-af88b26f8cb8","Type":"ContainerStarted","Data":"f2eb0d47feb581c4c87f1023e9e77779690c6c658400d561b32f306803baae7f"}
Nov 24 09:10:20 crc kubenswrapper[4719]: I1124 09:10:20.364989 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x"
Nov 24 09:10:20 crc kubenswrapper[4719]: I1124 09:10:20.386401 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" podStartSLOduration=4.079441412 podStartE2EDuration="40.386330671s" podCreationTimestamp="2025-11-24 09:09:40 +0000 UTC" firstStartedPulling="2025-11-24 09:09:41.885584308 +0000 UTC m=+958.216857560" lastFinishedPulling="2025-11-24 09:10:18.192473547 +0000 UTC m=+994.523746819" observedRunningTime="2025-11-24 09:10:20.382007796 +0000 UTC m=+996.713281068" watchObservedRunningTime="2025-11-24 09:10:20.386330671 +0000 UTC m=+996.717603923"
Nov 24 09:10:20 crc kubenswrapper[4719]: I1124 09:10:20.476438 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.079206 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.153120 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.372001 4719 generic.go:334] "Generic (PLEG): container finished" podID="ec125b11-d40b-4268-835b-293b46fca475" containerID="cb6a5222ecaa99669d36811844c388ce8d19257a19a5ff2f24c0b4474c7e0e00" exitCode=0
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.372063 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" event={"ID":"ec125b11-d40b-4268-835b-293b46fca475","Type":"ContainerDied","Data":"cb6a5222ecaa99669d36811844c388ce8d19257a19a5ff2f24c0b4474c7e0e00"}
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.685368 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.685666 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.726054 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.738760 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Nov 24 09:10:21 crc kubenswrapper[4719]: I1124 09:10:21.775973 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.380362 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" event={"ID":"ec125b11-d40b-4268-835b-293b46fca475","Type":"ContainerStarted","Data":"22548371c6bd49c88d9127ece85281b776b80ffabe69bdd9064bafc4fa99dbfd"}
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.398000 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" podStartSLOduration=-9223371994.456793 podStartE2EDuration="42.3979837s" podCreationTimestamp="2025-11-24 09:09:40 +0000 UTC" firstStartedPulling="2025-11-24 09:09:41.380858174 +0000 UTC m=+957.712131416" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:22.397402913 +0000 UTC m=+998.728676185" watchObservedRunningTime="2025-11-24 09:10:22.3979837 +0000 UTC m=+998.729256952"
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.420217 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.428225 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.759001 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8lw6x"]
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.759264 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" containerName="dnsmasq-dns" containerID="cri-o://f2eb0d47feb581c4c87f1023e9e77779690c6c658400d561b32f306803baae7f" gracePeriod=10
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.859277 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-x6zhj"]
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.860884 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj"
Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.867208 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xdb6r"]
Need to start a new one" pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.884447 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.884684 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.910783 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-x6zhj"] Nov 24 09:10:22 crc kubenswrapper[4719]: I1124 09:10:22.946679 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xdb6r"] Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.049443 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-ovs-rundir\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.049512 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phrx4\" (UniqueName: \"kubernetes.io/projected/ceda8ef7-a576-4bb4-ab82-104436789689-kube-api-access-phrx4\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.049554 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-ovn-rundir\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.049612 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgdhp\" (UniqueName: \"kubernetes.io/projected/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-kube-api-access-cgdhp\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.049645 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-config\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.050191 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.050225 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-config\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.050260 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-combined-ca-bundle\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.050285 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.050308 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.145890 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-srcvw"] Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151736 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-config\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151799 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-combined-ca-bundle\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151821 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151855 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151879 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-ovs-rundir\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151904 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-phrx4\" (UniqueName: \"kubernetes.io/projected/ceda8ef7-a576-4bb4-ab82-104436789689-kube-api-access-phrx4\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151934 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-ovn-rundir\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151971 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgdhp\" (UniqueName: \"kubernetes.io/projected/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-kube-api-access-cgdhp\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.151990 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-config\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.152009 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.152906 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.153267 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-ovs-rundir\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.154126 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-config\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.154856 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.155243 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-ovn-rundir\") pod \"ovn-controller-metrics-xdb6r\" 
(UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.156746 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-config\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.176911 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.177235 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-combined-ca-bundle\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.184960 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-68ff6"] Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.197811 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgdhp\" (UniqueName: \"kubernetes.io/projected/7bc3fe26-9fdd-4077-b4e1-6f9a35219a21-kube-api-access-cgdhp\") pod \"ovn-controller-metrics-xdb6r\" (UID: \"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21\") " pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.204023 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.211878 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-68ff6"] Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.213351 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phrx4\" (UniqueName: \"kubernetes.io/projected/ceda8ef7-a576-4bb4-ab82-104436789689-kube-api-access-phrx4\") pod \"dnsmasq-dns-5bf47b49b7-x6zhj\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.233472 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.239436 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.249600 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.256369 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.268292 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.268556 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.268674 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jzkd7" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.268779 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.279357 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xdb6r" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.329513 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357406 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357444 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357476 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357496 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357628 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-dns-svc\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357665 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc6nc\" (UniqueName: \"kubernetes.io/projected/9e84dd25-4828-43e5-80a8-25307b77944f-kube-api-access-mc6nc\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357680 4719 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-config\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357702 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdbr\" (UniqueName: \"kubernetes.io/projected/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-kube-api-access-ccdbr\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357727 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357745 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-scripts\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357818 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-config\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.357872 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.439826 4719 generic.go:334] "Generic (PLEG): container finished" podID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" containerID="f2eb0d47feb581c4c87f1023e9e77779690c6c658400d561b32f306803baae7f" exitCode=0 Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.441353 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" event={"ID":"70f5a384-410e-4e03-a5bb-af88b26f8cb8","Type":"ContainerDied","Data":"f2eb0d47feb581c4c87f1023e9e77779690c6c658400d561b32f306803baae7f"} Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.442188 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" podUID="ec125b11-d40b-4268-835b-293b46fca475" containerName="dnsmasq-dns" containerID="cri-o://22548371c6bd49c88d9127ece85281b776b80ffabe69bdd9064bafc4fa99dbfd" gracePeriod=10 Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.442355 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.460887 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc6nc\" (UniqueName: 
\"kubernetes.io/projected/9e84dd25-4828-43e5-80a8-25307b77944f-kube-api-access-mc6nc\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.460928 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-config\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.460950 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccdbr\" (UniqueName: \"kubernetes.io/projected/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-kube-api-access-ccdbr\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.460971 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.460991 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-scripts\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.461026 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-config\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.461095 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.461126 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.461145 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.461173 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: 
I1124 09:10:23.461192 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.461212 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-dns-svc\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.462077 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.462152 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-dns-svc\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.463397 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.465297 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-config\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.465983 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.466209 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-config\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.466525 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.467210 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-scripts\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " 
pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.471157 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.472866 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.509290 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc6nc\" (UniqueName: \"kubernetes.io/projected/9e84dd25-4828-43e5-80a8-25307b77944f-kube-api-access-mc6nc\") pod \"dnsmasq-dns-8554648995-68ff6\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.512398 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccdbr\" (UniqueName: \"kubernetes.io/projected/73dcc2c6-9ccf-4682-bd39-3c439d4691a2-kube-api-access-ccdbr\") pod \"ovn-northd-0\" (UID: \"73dcc2c6-9ccf-4682-bd39-3c439d4691a2\") " pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.574234 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.576109 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.682132 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.688813 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.712994 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.879896 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xdb6r"] Nov 24 09:10:23 crc kubenswrapper[4719]: W1124 09:10:23.911946 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bc3fe26_9fdd_4077_b4e1_6f9a35219a21.slice/crio-01aed07d6e694d29cac83cc585d3257033aa1f70c76bdfecf6d6cbaf8681e930 WatchSource:0}: Error finding container 01aed07d6e694d29cac83cc585d3257033aa1f70c76bdfecf6d6cbaf8681e930: Status 404 returned error can't find the container with id 01aed07d6e694d29cac83cc585d3257033aa1f70c76bdfecf6d6cbaf8681e930 Nov 24 09:10:23 crc kubenswrapper[4719]: I1124 09:10:23.959348 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-x6zhj"] Nov 24 09:10:23 crc kubenswrapper[4719]: W1124 09:10:23.974357 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceda8ef7_a576_4bb4_ab82_104436789689.slice/crio-fdb8fc89059070ee36b34d7ae48886eaa7cf85137a0f4264925bee944c9bf152 WatchSource:0}: Error finding container fdb8fc89059070ee36b34d7ae48886eaa7cf85137a0f4264925bee944c9bf152: Status 404 returned error can't find the container with id fdb8fc89059070ee36b34d7ae48886eaa7cf85137a0f4264925bee944c9bf152 Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.043394 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-68ff6"] Nov 24 09:10:24 crc kubenswrapper[4719]: W1124 09:10:24.077519 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e84dd25_4828_43e5_80a8_25307b77944f.slice/crio-b87e4c0d23ab5d186e1d039dbba4b254e62416371e1de4aa586f5e25f336a0c5 WatchSource:0}: Error finding container b87e4c0d23ab5d186e1d039dbba4b254e62416371e1de4aa586f5e25f336a0c5: Status 404 returned error can't find the container with id b87e4c0d23ab5d186e1d039dbba4b254e62416371e1de4aa586f5e25f336a0c5 Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.328631 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 09:10:24 crc kubenswrapper[4719]: W1124 09:10:24.336719 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73dcc2c6_9ccf_4682_bd39_3c439d4691a2.slice/crio-9e79751a761271c7066f106e27f4af0ea25a37286c0f9cebf92ec8064f57a01c WatchSource:0}: Error finding container 9e79751a761271c7066f106e27f4af0ea25a37286c0f9cebf92ec8064f57a01c: Status 404 returned error can't find the container with id 9e79751a761271c7066f106e27f4af0ea25a37286c0f9cebf92ec8064f57a01c Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.448372 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xdb6r" event={"ID":"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21","Type":"ContainerStarted","Data":"01aed07d6e694d29cac83cc585d3257033aa1f70c76bdfecf6d6cbaf8681e930"} Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.451614 4719 generic.go:334] "Generic (PLEG): container finished" podID="ec125b11-d40b-4268-835b-293b46fca475" containerID="22548371c6bd49c88d9127ece85281b776b80ffabe69bdd9064bafc4fa99dbfd" exitCode=0 Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.451692 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-666b6646f7-srcvw" event={"ID":"ec125b11-d40b-4268-835b-293b46fca475","Type":"ContainerDied","Data":"22548371c6bd49c88d9127ece85281b776b80ffabe69bdd9064bafc4fa99dbfd"} Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.452889 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" event={"ID":"ceda8ef7-a576-4bb4-ab82-104436789689","Type":"ContainerStarted","Data":"fdb8fc89059070ee36b34d7ae48886eaa7cf85137a0f4264925bee944c9bf152"} Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.454265 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"73dcc2c6-9ccf-4682-bd39-3c439d4691a2","Type":"ContainerStarted","Data":"9e79751a761271c7066f106e27f4af0ea25a37286c0f9cebf92ec8064f57a01c"} Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.455432 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-68ff6" event={"ID":"9e84dd25-4828-43e5-80a8-25307b77944f","Type":"ContainerStarted","Data":"b87e4c0d23ab5d186e1d039dbba4b254e62416371e1de4aa586f5e25f336a0c5"} Nov 24 09:10:24 crc kubenswrapper[4719]: I1124 09:10:24.544210 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.387428 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2hszz"] Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.388720 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.394794 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3bfe-account-create-l5xls"] Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.396429 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.400368 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.400670 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2hszz"] Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.425409 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3bfe-account-create-l5xls"] Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.449233 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qbjmv"] Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.460756 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.482247 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qbjmv"] Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.506163 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230fc0d1-ff11-476a-82be-177f83a0e81f-operator-scripts\") pod \"placement-db-create-qbjmv\" (UID: \"230fc0d1-ff11-476a-82be-177f83a0e81f\") " pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.506274 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d86e1d-9f0f-4aec-a19c-a02a05a34319-operator-scripts\") pod \"keystone-3bfe-account-create-l5xls\" (UID: \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\") " pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.507213 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr98h\" (UniqueName: \"kubernetes.io/projected/17d86e1d-9f0f-4aec-a19c-a02a05a34319-kube-api-access-lr98h\") pod \"keystone-3bfe-account-create-l5xls\" (UID: \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\") " pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.507298 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e32e613-504a-4221-a5ea-29c4768e4ef9-operator-scripts\") pod \"keystone-db-create-2hszz\" (UID: \"9e32e613-504a-4221-a5ea-29c4768e4ef9\") " pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.507350 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c5rg\" (UniqueName: \"kubernetes.io/projected/230fc0d1-ff11-476a-82be-177f83a0e81f-kube-api-access-7c5rg\") pod \"placement-db-create-qbjmv\" (UID: \"230fc0d1-ff11-476a-82be-177f83a0e81f\") " pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.507403 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhr24\" (UniqueName: \"kubernetes.io/projected/9e32e613-504a-4221-a5ea-29c4768e4ef9-kube-api-access-vhr24\") pod \"keystone-db-create-2hszz\" (UID: \"9e32e613-504a-4221-a5ea-29c4768e4ef9\") " pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.538500 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-892d-account-create-rtmf4"] Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.539859 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.546722 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.553146 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-892d-account-create-rtmf4"] Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.608672 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e32e613-504a-4221-a5ea-29c4768e4ef9-operator-scripts\") pod \"keystone-db-create-2hszz\" (UID: \"9e32e613-504a-4221-a5ea-29c4768e4ef9\") " pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.608720 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c5rg\" (UniqueName: \"kubernetes.io/projected/230fc0d1-ff11-476a-82be-177f83a0e81f-kube-api-access-7c5rg\") pod \"placement-db-create-qbjmv\" (UID: \"230fc0d1-ff11-476a-82be-177f83a0e81f\") " pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.608743 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w62ng\" (UniqueName: \"kubernetes.io/projected/6e9b95eb-5130-4d13-9557-fe979505e602-kube-api-access-w62ng\") pod \"placement-892d-account-create-rtmf4\" (UID: \"6e9b95eb-5130-4d13-9557-fe979505e602\") " pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.608779 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhr24\" (UniqueName: \"kubernetes.io/projected/9e32e613-504a-4221-a5ea-29c4768e4ef9-kube-api-access-vhr24\") pod \"keystone-db-create-2hszz\" (UID: \"9e32e613-504a-4221-a5ea-29c4768e4ef9\") " pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.608810 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230fc0d1-ff11-476a-82be-177f83a0e81f-operator-scripts\") pod \"placement-db-create-qbjmv\" (UID: \"230fc0d1-ff11-476a-82be-177f83a0e81f\") " pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.608837 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b95eb-5130-4d13-9557-fe979505e602-operator-scripts\") pod \"placement-892d-account-create-rtmf4\" (UID: \"6e9b95eb-5130-4d13-9557-fe979505e602\") " pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.608938 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d86e1d-9f0f-4aec-a19c-a02a05a34319-operator-scripts\") pod \"keystone-3bfe-account-create-l5xls\" (UID: \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\") " pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.608979 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr98h\" (UniqueName: \"kubernetes.io/projected/17d86e1d-9f0f-4aec-a19c-a02a05a34319-kube-api-access-lr98h\") pod 
\"keystone-3bfe-account-create-l5xls\" (UID: \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\") " pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.610525 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230fc0d1-ff11-476a-82be-177f83a0e81f-operator-scripts\") pod \"placement-db-create-qbjmv\" (UID: \"230fc0d1-ff11-476a-82be-177f83a0e81f\") " pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.610825 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d86e1d-9f0f-4aec-a19c-a02a05a34319-operator-scripts\") pod \"keystone-3bfe-account-create-l5xls\" (UID: \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\") " pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.610981 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e32e613-504a-4221-a5ea-29c4768e4ef9-operator-scripts\") pod \"keystone-db-create-2hszz\" (UID: \"9e32e613-504a-4221-a5ea-29c4768e4ef9\") " pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.628387 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c5rg\" (UniqueName: \"kubernetes.io/projected/230fc0d1-ff11-476a-82be-177f83a0e81f-kube-api-access-7c5rg\") pod \"placement-db-create-qbjmv\" (UID: \"230fc0d1-ff11-476a-82be-177f83a0e81f\") " pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.628816 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhr24\" (UniqueName: \"kubernetes.io/projected/9e32e613-504a-4221-a5ea-29c4768e4ef9-kube-api-access-vhr24\") pod \"keystone-db-create-2hszz\" (UID: \"9e32e613-504a-4221-a5ea-29c4768e4ef9\") " pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.629221 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr98h\" (UniqueName: \"kubernetes.io/projected/17d86e1d-9f0f-4aec-a19c-a02a05a34319-kube-api-access-lr98h\") pod \"keystone-3bfe-account-create-l5xls\" (UID: \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\") " pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.709419 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.710225 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w62ng\" (UniqueName: \"kubernetes.io/projected/6e9b95eb-5130-4d13-9557-fe979505e602-kube-api-access-w62ng\") pod \"placement-892d-account-create-rtmf4\" (UID: \"6e9b95eb-5130-4d13-9557-fe979505e602\") " pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.710351 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b95eb-5130-4d13-9557-fe979505e602-operator-scripts\") pod \"placement-892d-account-create-rtmf4\" (UID: \"6e9b95eb-5130-4d13-9557-fe979505e602\") " pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.711785 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b95eb-5130-4d13-9557-fe979505e602-operator-scripts\") pod \"placement-892d-account-create-rtmf4\" (UID: \"6e9b95eb-5130-4d13-9557-fe979505e602\") " pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.720825 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.736784 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w62ng\" (UniqueName: \"kubernetes.io/projected/6e9b95eb-5130-4d13-9557-fe979505e602-kube-api-access-w62ng\") pod \"placement-892d-account-create-rtmf4\" (UID: \"6e9b95eb-5130-4d13-9557-fe979505e602\") " pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.788438 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:25 crc kubenswrapper[4719]: I1124 09:10:25.857796 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:26 crc kubenswrapper[4719]: I1124 09:10:26.337179 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qbjmv"] Nov 24 09:10:26 crc kubenswrapper[4719]: W1124 09:10:26.342397 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod230fc0d1_ff11_476a_82be_177f83a0e81f.slice/crio-7aa076c130e8a8893bd3d44862ab79408e0053b15afa1fd42df6c323f03b08e0 WatchSource:0}: Error finding container 7aa076c130e8a8893bd3d44862ab79408e0053b15afa1fd42df6c323f03b08e0: Status 404 returned error can't find the container with id 7aa076c130e8a8893bd3d44862ab79408e0053b15afa1fd42df6c323f03b08e0 Nov 24 09:10:26 crc kubenswrapper[4719]: I1124 09:10:26.371362 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: connect: connection refused" Nov 24 09:10:26 crc kubenswrapper[4719]: I1124 09:10:26.478152 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qbjmv" event={"ID":"230fc0d1-ff11-476a-82be-177f83a0e81f","Type":"ContainerStarted","Data":"7aa076c130e8a8893bd3d44862ab79408e0053b15afa1fd42df6c323f03b08e0"} Nov 24 09:10:26 crc kubenswrapper[4719]: I1124 09:10:26.574126 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3bfe-account-create-l5xls"] Nov 24 09:10:26 crc kubenswrapper[4719]: I1124 09:10:26.685495 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-892d-account-create-rtmf4"] Nov 24 09:10:26 crc kubenswrapper[4719]: W1124 09:10:26.700877 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9b95eb_5130_4d13_9557_fe979505e602.slice/crio-6d829d504d042191a669f62451f27f649625aebe62a3a7bae722391c6e851c59 WatchSource:0}: Error finding container 6d829d504d042191a669f62451f27f649625aebe62a3a7bae722391c6e851c59: Status 404 returned error can't find the container with id 6d829d504d042191a669f62451f27f649625aebe62a3a7bae722391c6e851c59 Nov 24 09:10:26 crc kubenswrapper[4719]: I1124 09:10:26.701594 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2hszz"] Nov 24 09:10:26 crc kubenswrapper[4719]: I1124 09:10:26.780536 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.115375 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.138606 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4rg\" (UniqueName: \"kubernetes.io/projected/70f5a384-410e-4e03-a5bb-af88b26f8cb8-kube-api-access-7c4rg\") pod \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.138795 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-config\") pod \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.138877 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-dns-svc\") pod \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\" (UID: \"70f5a384-410e-4e03-a5bb-af88b26f8cb8\") " Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.154264 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f5a384-410e-4e03-a5bb-af88b26f8cb8-kube-api-access-7c4rg" (OuterVolumeSpecName: "kube-api-access-7c4rg") pod "70f5a384-410e-4e03-a5bb-af88b26f8cb8" (UID: "70f5a384-410e-4e03-a5bb-af88b26f8cb8"). InnerVolumeSpecName "kube-api-access-7c4rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.195697 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-config" (OuterVolumeSpecName: "config") pod "70f5a384-410e-4e03-a5bb-af88b26f8cb8" (UID: "70f5a384-410e-4e03-a5bb-af88b26f8cb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.195918 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70f5a384-410e-4e03-a5bb-af88b26f8cb8" (UID: "70f5a384-410e-4e03-a5bb-af88b26f8cb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.225205 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.241604 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-dns-svc\") pod \"ec125b11-d40b-4268-835b-293b46fca475\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.241809 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pt8t\" (UniqueName: \"kubernetes.io/projected/ec125b11-d40b-4268-835b-293b46fca475-kube-api-access-4pt8t\") pod \"ec125b11-d40b-4268-835b-293b46fca475\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.241836 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-config\") pod \"ec125b11-d40b-4268-835b-293b46fca475\" (UID: \"ec125b11-d40b-4268-835b-293b46fca475\") " Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.242287 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4rg\" (UniqueName: \"kubernetes.io/projected/70f5a384-410e-4e03-a5bb-af88b26f8cb8-kube-api-access-7c4rg\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.242312 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.242322 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70f5a384-410e-4e03-a5bb-af88b26f8cb8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.245996 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec125b11-d40b-4268-835b-293b46fca475-kube-api-access-4pt8t" (OuterVolumeSpecName: "kube-api-access-4pt8t") pod "ec125b11-d40b-4268-835b-293b46fca475" (UID: "ec125b11-d40b-4268-835b-293b46fca475"). InnerVolumeSpecName "kube-api-access-4pt8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.292827 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec125b11-d40b-4268-835b-293b46fca475" (UID: "ec125b11-d40b-4268-835b-293b46fca475"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.333748 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-config" (OuterVolumeSpecName: "config") pod "ec125b11-d40b-4268-835b-293b46fca475" (UID: "ec125b11-d40b-4268-835b-293b46fca475"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.343632 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pt8t\" (UniqueName: \"kubernetes.io/projected/ec125b11-d40b-4268-835b-293b46fca475-kube-api-access-4pt8t\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.343667 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.343685 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec125b11-d40b-4268-835b-293b46fca475-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.486449 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" event={"ID":"70f5a384-410e-4e03-a5bb-af88b26f8cb8","Type":"ContainerDied","Data":"5369efc34bc0cbab5cfab5ef8f0336035ca4d90bf5b66faff2ad51a606d6d0ab"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.487155 4719 scope.go:117] "RemoveContainer" containerID="f2eb0d47feb581c4c87f1023e9e77779690c6c658400d561b32f306803baae7f" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.486480 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8lw6x" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.488917 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-68ff6" event={"ID":"9e84dd25-4828-43e5-80a8-25307b77944f","Type":"ContainerStarted","Data":"3d6a584fa4445dc408eedaf1d6e870521f71641f51da0f8c7ee432a33755c167"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.490412 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3bfe-account-create-l5xls" event={"ID":"17d86e1d-9f0f-4aec-a19c-a02a05a34319","Type":"ContainerStarted","Data":"a7be9d6016c1ba6142798101dddf908e6e8a68104afb076135361c54626f8c92"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.492058 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2hszz" event={"ID":"9e32e613-504a-4221-a5ea-29c4768e4ef9","Type":"ContainerStarted","Data":"6cd1dd163548e9062bc28382e3808ba7441fc7d45a433049b59961907bb55e72"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.498369 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xdb6r" event={"ID":"7bc3fe26-9fdd-4077-b4e1-6f9a35219a21","Type":"ContainerStarted","Data":"4b0b48b6f03e4c9c7e46108d93641f4ee5f947af08bec56a10e74c674c9241cf"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.500126 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" event={"ID":"ceda8ef7-a576-4bb4-ab82-104436789689","Type":"ContainerStarted","Data":"25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.501465 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-892d-account-create-rtmf4" event={"ID":"6e9b95eb-5130-4d13-9557-fe979505e602","Type":"ContainerStarted","Data":"6d829d504d042191a669f62451f27f649625aebe62a3a7bae722391c6e851c59"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.502729 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-qbjmv" event={"ID":"230fc0d1-ff11-476a-82be-177f83a0e81f","Type":"ContainerStarted","Data":"3e9bb87b7ae6edde755e9a7f64e058c546a81c25c3c38ce0ccd29af7f89bc40c"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.507566 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" event={"ID":"ec125b11-d40b-4268-835b-293b46fca475","Type":"ContainerDied","Data":"c3322f4864f0c5fa3b9024be65b54a650561a91ad84d869b467b7b5941f97b7b"} Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.507614 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-srcvw" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.512850 4719 scope.go:117] "RemoveContainer" containerID="ac619c8d8744fedf1fc601c555b00f6044ad6e6f5c4d856aa298926528e736d4" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.528058 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8lw6x"] Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.550052 4719 scope.go:117] "RemoveContainer" containerID="22548371c6bd49c88d9127ece85281b776b80ffabe69bdd9064bafc4fa99dbfd" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.553101 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8lw6x"] Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.564682 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-srcvw"] Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.568320 4719 scope.go:117] "RemoveContainer" containerID="cb6a5222ecaa99669d36811844c388ce8d19257a19a5ff2f24c0b4474c7e0e00" Nov 24 09:10:27 crc kubenswrapper[4719]: I1124 09:10:27.570294 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-srcvw"] Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.517525 4719 generic.go:334] "Generic (PLEG): container finished" podID="9e84dd25-4828-43e5-80a8-25307b77944f" containerID="3d6a584fa4445dc408eedaf1d6e870521f71641f51da0f8c7ee432a33755c167" exitCode=0 Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.517627 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-68ff6" event={"ID":"9e84dd25-4828-43e5-80a8-25307b77944f","Type":"ContainerDied","Data":"3d6a584fa4445dc408eedaf1d6e870521f71641f51da0f8c7ee432a33755c167"} Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.519584 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-892d-account-create-rtmf4" event={"ID":"6e9b95eb-5130-4d13-9557-fe979505e602","Type":"ContainerStarted","Data":"6f50a492a449685aac62f4cd929b3ea899cadc55f3b5d2b1e0880ae72e9a3b2d"} Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.530933 4719 generic.go:334] "Generic (PLEG): container finished" podID="ceda8ef7-a576-4bb4-ab82-104436789689" containerID="25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231" exitCode=0 Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.545667 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" path="/var/lib/kubelet/pods/70f5a384-410e-4e03-a5bb-af88b26f8cb8/volumes" Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.546535 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec125b11-d40b-4268-835b-293b46fca475" path="/var/lib/kubelet/pods/ec125b11-d40b-4268-835b-293b46fca475/volumes" Nov 24 09:10:28 crc 
kubenswrapper[4719]: I1124 09:10:28.547199 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3bfe-account-create-l5xls" event={"ID":"17d86e1d-9f0f-4aec-a19c-a02a05a34319","Type":"ContainerStarted","Data":"c57f66d7b83e885df01da0887588bcd8d5ba9de9349303fdc2352d89464ce644"} Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.547234 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2hszz" event={"ID":"9e32e613-504a-4221-a5ea-29c4768e4ef9","Type":"ContainerStarted","Data":"86a840314ef2a6eac6790008c5fb77711b8e35643341882568031a4b44a17e9e"} Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.547255 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" event={"ID":"ceda8ef7-a576-4bb4-ab82-104436789689","Type":"ContainerDied","Data":"25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231"} Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.562800 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3bfe-account-create-l5xls" podStartSLOduration=3.562760907 podStartE2EDuration="3.562760907s" podCreationTimestamp="2025-11-24 09:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:28.561352967 +0000 UTC m=+1004.892626229" watchObservedRunningTime="2025-11-24 09:10:28.562760907 +0000 UTC m=+1004.894034169" Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.594808 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-qbjmv" podStartSLOduration=3.594782753 podStartE2EDuration="3.594782753s" podCreationTimestamp="2025-11-24 09:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:28.578885403 +0000 UTC m=+1004.910158655" watchObservedRunningTime="2025-11-24 09:10:28.594782753 +0000 UTC m=+1004.926056015" Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.603411 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-2hszz" podStartSLOduration=3.603390381 podStartE2EDuration="3.603390381s" podCreationTimestamp="2025-11-24 09:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:28.595612167 +0000 UTC m=+1004.926885439" watchObservedRunningTime="2025-11-24 09:10:28.603390381 +0000 UTC m=+1004.934663633" Nov 24 09:10:28 crc kubenswrapper[4719]: I1124 09:10:28.650812 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xdb6r" podStartSLOduration=6.650787691 podStartE2EDuration="6.650787691s" podCreationTimestamp="2025-11-24 09:10:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:28.635272913 +0000 UTC m=+1004.966546165" watchObservedRunningTime="2025-11-24 09:10:28.650787691 +0000 UTC m=+1004.982060933" Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.539264 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-68ff6" event={"ID":"9e84dd25-4828-43e5-80a8-25307b77944f","Type":"ContainerStarted","Data":"5aca69983d94956e6f451ad5e0919275f0d18427dee24d4b8ca5a1d0d74f7d28"} Nov 24 09:10:29 crc 
kubenswrapper[4719]: I1124 09:10:29.539404 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.540252 4719 generic.go:334] "Generic (PLEG): container finished" podID="6e9b95eb-5130-4d13-9557-fe979505e602" containerID="6f50a492a449685aac62f4cd929b3ea899cadc55f3b5d2b1e0880ae72e9a3b2d" exitCode=0 Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.540328 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-892d-account-create-rtmf4" event={"ID":"6e9b95eb-5130-4d13-9557-fe979505e602","Type":"ContainerDied","Data":"6f50a492a449685aac62f4cd929b3ea899cadc55f3b5d2b1e0880ae72e9a3b2d"} Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.541769 4719 generic.go:334] "Generic (PLEG): container finished" podID="230fc0d1-ff11-476a-82be-177f83a0e81f" containerID="3e9bb87b7ae6edde755e9a7f64e058c546a81c25c3c38ce0ccd29af7f89bc40c" exitCode=0 Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.541831 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qbjmv" event={"ID":"230fc0d1-ff11-476a-82be-177f83a0e81f","Type":"ContainerDied","Data":"3e9bb87b7ae6edde755e9a7f64e058c546a81c25c3c38ce0ccd29af7f89bc40c"} Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.544062 4719 generic.go:334] "Generic (PLEG): container finished" podID="17d86e1d-9f0f-4aec-a19c-a02a05a34319" containerID="c57f66d7b83e885df01da0887588bcd8d5ba9de9349303fdc2352d89464ce644" exitCode=0 Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.544210 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3bfe-account-create-l5xls" event={"ID":"17d86e1d-9f0f-4aec-a19c-a02a05a34319","Type":"ContainerDied","Data":"c57f66d7b83e885df01da0887588bcd8d5ba9de9349303fdc2352d89464ce644"} Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.546336 4719 generic.go:334] "Generic (PLEG): container finished" podID="9e32e613-504a-4221-a5ea-29c4768e4ef9" containerID="86a840314ef2a6eac6790008c5fb77711b8e35643341882568031a4b44a17e9e" exitCode=0 Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.546437 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2hszz" event={"ID":"9e32e613-504a-4221-a5ea-29c4768e4ef9","Type":"ContainerDied","Data":"86a840314ef2a6eac6790008c5fb77711b8e35643341882568031a4b44a17e9e"} Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.548372 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" event={"ID":"ceda8ef7-a576-4bb4-ab82-104436789689","Type":"ContainerStarted","Data":"ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844"} Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.548508 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.563687 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-892d-account-create-rtmf4" podStartSLOduration=4.56366456 podStartE2EDuration="4.56366456s" podCreationTimestamp="2025-11-24 09:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:28.662888021 +0000 UTC m=+1004.994161273" watchObservedRunningTime="2025-11-24 09:10:29.56366456 +0000 UTC m=+1005.894937812" Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 
09:10:29.564617 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-68ff6" podStartSLOduration=6.564610797 podStartE2EDuration="6.564610797s" podCreationTimestamp="2025-11-24 09:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:29.557531433 +0000 UTC m=+1005.888804695" watchObservedRunningTime="2025-11-24 09:10:29.564610797 +0000 UTC m=+1005.895884049" Nov 24 09:10:29 crc kubenswrapper[4719]: I1124 09:10:29.653836 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" podStartSLOduration=7.653809935 podStartE2EDuration="7.653809935s" podCreationTimestamp="2025-11-24 09:10:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:29.645419102 +0000 UTC m=+1005.976692364" watchObservedRunningTime="2025-11-24 09:10:29.653809935 +0000 UTC m=+1005.985083197" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.562090 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"73dcc2c6-9ccf-4682-bd39-3c439d4691a2","Type":"ContainerStarted","Data":"5241486058cc41d38f2035e6c15ea05e88d252e48635dad92dee320fd3b9bb10"} Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.562158 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"73dcc2c6-9ccf-4682-bd39-3c439d4691a2","Type":"ContainerStarted","Data":"eb61ae8204ed42e8898b52456c4ae0a64901e0295437806af054cab4080f3d40"} Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.562727 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.585751 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.508913464 podStartE2EDuration="7.585735614s" podCreationTimestamp="2025-11-24 09:10:23 +0000 UTC" firstStartedPulling="2025-11-24 09:10:24.338908125 +0000 UTC m=+1000.670181377" lastFinishedPulling="2025-11-24 09:10:29.415730275 +0000 UTC m=+1005.747003527" observedRunningTime="2025-11-24 09:10:30.578222947 +0000 UTC m=+1006.909496199" watchObservedRunningTime="2025-11-24 09:10:30.585735614 +0000 UTC m=+1006.917008866" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.670387 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jx67j"] Nov 24 09:10:30 crc kubenswrapper[4719]: E1124 09:10:30.670688 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec125b11-d40b-4268-835b-293b46fca475" containerName="dnsmasq-dns" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.670701 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec125b11-d40b-4268-835b-293b46fca475" containerName="dnsmasq-dns" Nov 24 09:10:30 crc kubenswrapper[4719]: E1124 09:10:30.670711 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" containerName="init" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.670718 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" containerName="init" Nov 24 09:10:30 crc kubenswrapper[4719]: E1124 09:10:30.670727 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" 
containerName="dnsmasq-dns" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.670733 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" containerName="dnsmasq-dns" Nov 24 09:10:30 crc kubenswrapper[4719]: E1124 09:10:30.670756 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec125b11-d40b-4268-835b-293b46fca475" containerName="init" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.670762 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec125b11-d40b-4268-835b-293b46fca475" containerName="init" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.670911 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec125b11-d40b-4268-835b-293b46fca475" containerName="dnsmasq-dns" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.670937 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f5a384-410e-4e03-a5bb-af88b26f8cb8" containerName="dnsmasq-dns" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.671496 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jx67j" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.704666 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jx67j"] Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.705253 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh4bg\" (UniqueName: \"kubernetes.io/projected/362cf151-7819-46b5-9b25-2f42aa6370ac-kube-api-access-lh4bg\") pod \"glance-db-create-jx67j\" (UID: \"362cf151-7819-46b5-9b25-2f42aa6370ac\") " pod="openstack/glance-db-create-jx67j" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.705302 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/362cf151-7819-46b5-9b25-2f42aa6370ac-operator-scripts\") pod \"glance-db-create-jx67j\" (UID: \"362cf151-7819-46b5-9b25-2f42aa6370ac\") " pod="openstack/glance-db-create-jx67j" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.766662 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-36b5-account-create-vwrrf"] Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.768139 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.776910 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-36b5-account-create-vwrrf"] Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.806057 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f6fad86-e72c-41c1-8322-614721929c2a-operator-scripts\") pod \"glance-36b5-account-create-vwrrf\" (UID: \"3f6fad86-e72c-41c1-8322-614721929c2a\") " pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.806118 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh4bg\" (UniqueName: \"kubernetes.io/projected/362cf151-7819-46b5-9b25-2f42aa6370ac-kube-api-access-lh4bg\") pod \"glance-db-create-jx67j\" (UID: \"362cf151-7819-46b5-9b25-2f42aa6370ac\") " pod="openstack/glance-db-create-jx67j" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.806156 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/362cf151-7819-46b5-9b25-2f42aa6370ac-operator-scripts\") pod \"glance-db-create-jx67j\" (UID: \"362cf151-7819-46b5-9b25-2f42aa6370ac\") " pod="openstack/glance-db-create-jx67j" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.806181 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntv5f\" (UniqueName: \"kubernetes.io/projected/3f6fad86-e72c-41c1-8322-614721929c2a-kube-api-access-ntv5f\") pod \"glance-36b5-account-create-vwrrf\" (UID: \"3f6fad86-e72c-41c1-8322-614721929c2a\") " pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.807739 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/362cf151-7819-46b5-9b25-2f42aa6370ac-operator-scripts\") pod \"glance-db-create-jx67j\" (UID: \"362cf151-7819-46b5-9b25-2f42aa6370ac\") " pod="openstack/glance-db-create-jx67j" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.809135 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.869691 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh4bg\" (UniqueName: \"kubernetes.io/projected/362cf151-7819-46b5-9b25-2f42aa6370ac-kube-api-access-lh4bg\") pod \"glance-db-create-jx67j\" (UID: \"362cf151-7819-46b5-9b25-2f42aa6370ac\") " pod="openstack/glance-db-create-jx67j" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.908175 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f6fad86-e72c-41c1-8322-614721929c2a-operator-scripts\") pod \"glance-36b5-account-create-vwrrf\" (UID: \"3f6fad86-e72c-41c1-8322-614721929c2a\") " pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.908287 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntv5f\" (UniqueName: \"kubernetes.io/projected/3f6fad86-e72c-41c1-8322-614721929c2a-kube-api-access-ntv5f\") pod \"glance-36b5-account-create-vwrrf\" (UID: \"3f6fad86-e72c-41c1-8322-614721929c2a\") " 
pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.911280 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f6fad86-e72c-41c1-8322-614721929c2a-operator-scripts\") pod \"glance-36b5-account-create-vwrrf\" (UID: \"3f6fad86-e72c-41c1-8322-614721929c2a\") " pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:30 crc kubenswrapper[4719]: I1124 09:10:30.924598 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntv5f\" (UniqueName: \"kubernetes.io/projected/3f6fad86-e72c-41c1-8322-614721929c2a-kube-api-access-ntv5f\") pod \"glance-36b5-account-create-vwrrf\" (UID: \"3f6fad86-e72c-41c1-8322-614721929c2a\") " pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.031459 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jx67j" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.044077 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.104213 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.108113 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.115354 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.130728 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.136908 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230fc0d1-ff11-476a-82be-177f83a0e81f-operator-scripts\") pod \"230fc0d1-ff11-476a-82be-177f83a0e81f\" (UID: \"230fc0d1-ff11-476a-82be-177f83a0e81f\") " Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.137087 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c5rg\" (UniqueName: \"kubernetes.io/projected/230fc0d1-ff11-476a-82be-177f83a0e81f-kube-api-access-7c5rg\") pod \"230fc0d1-ff11-476a-82be-177f83a0e81f\" (UID: \"230fc0d1-ff11-476a-82be-177f83a0e81f\") " Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.137678 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230fc0d1-ff11-476a-82be-177f83a0e81f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "230fc0d1-ff11-476a-82be-177f83a0e81f" (UID: "230fc0d1-ff11-476a-82be-177f83a0e81f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.139178 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e32e613-504a-4221-a5ea-29c4768e4ef9-operator-scripts\") pod \"9e32e613-504a-4221-a5ea-29c4768e4ef9\" (UID: \"9e32e613-504a-4221-a5ea-29c4768e4ef9\") " Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.139219 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d86e1d-9f0f-4aec-a19c-a02a05a34319-operator-scripts\") pod \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\" (UID: \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\") " Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.139243 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w62ng\" (UniqueName: \"kubernetes.io/projected/6e9b95eb-5130-4d13-9557-fe979505e602-kube-api-access-w62ng\") pod \"6e9b95eb-5130-4d13-9557-fe979505e602\" (UID: \"6e9b95eb-5130-4d13-9557-fe979505e602\") " Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.139261 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr98h\" (UniqueName: \"kubernetes.io/projected/17d86e1d-9f0f-4aec-a19c-a02a05a34319-kube-api-access-lr98h\") pod \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\" (UID: \"17d86e1d-9f0f-4aec-a19c-a02a05a34319\") " Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.139736 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d86e1d-9f0f-4aec-a19c-a02a05a34319-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17d86e1d-9f0f-4aec-a19c-a02a05a34319" (UID: "17d86e1d-9f0f-4aec-a19c-a02a05a34319"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.140303 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e32e613-504a-4221-a5ea-29c4768e4ef9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e32e613-504a-4221-a5ea-29c4768e4ef9" (UID: "9e32e613-504a-4221-a5ea-29c4768e4ef9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.143311 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9b95eb-5130-4d13-9557-fe979505e602-kube-api-access-w62ng" (OuterVolumeSpecName: "kube-api-access-w62ng") pod "6e9b95eb-5130-4d13-9557-fe979505e602" (UID: "6e9b95eb-5130-4d13-9557-fe979505e602"). InnerVolumeSpecName "kube-api-access-w62ng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.143597 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b95eb-5130-4d13-9557-fe979505e602-operator-scripts\") pod \"6e9b95eb-5130-4d13-9557-fe979505e602\" (UID: \"6e9b95eb-5130-4d13-9557-fe979505e602\") " Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.143784 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d86e1d-9f0f-4aec-a19c-a02a05a34319-kube-api-access-lr98h" (OuterVolumeSpecName: "kube-api-access-lr98h") pod "17d86e1d-9f0f-4aec-a19c-a02a05a34319" (UID: "17d86e1d-9f0f-4aec-a19c-a02a05a34319"). InnerVolumeSpecName "kube-api-access-lr98h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.144354 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e9b95eb-5130-4d13-9557-fe979505e602-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e9b95eb-5130-4d13-9557-fe979505e602" (UID: "6e9b95eb-5130-4d13-9557-fe979505e602"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.143832 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhr24\" (UniqueName: \"kubernetes.io/projected/9e32e613-504a-4221-a5ea-29c4768e4ef9-kube-api-access-vhr24\") pod \"9e32e613-504a-4221-a5ea-29c4768e4ef9\" (UID: \"9e32e613-504a-4221-a5ea-29c4768e4ef9\") " Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.146253 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e9b95eb-5130-4d13-9557-fe979505e602-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.146267 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/230fc0d1-ff11-476a-82be-177f83a0e81f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.146838 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e32e613-504a-4221-a5ea-29c4768e4ef9-kube-api-access-vhr24" (OuterVolumeSpecName: "kube-api-access-vhr24") pod "9e32e613-504a-4221-a5ea-29c4768e4ef9" (UID: "9e32e613-504a-4221-a5ea-29c4768e4ef9"). InnerVolumeSpecName "kube-api-access-vhr24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.148986 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e32e613-504a-4221-a5ea-29c4768e4ef9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.149007 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d86e1d-9f0f-4aec-a19c-a02a05a34319-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.149017 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w62ng\" (UniqueName: \"kubernetes.io/projected/6e9b95eb-5130-4d13-9557-fe979505e602-kube-api-access-w62ng\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.149027 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr98h\" (UniqueName: \"kubernetes.io/projected/17d86e1d-9f0f-4aec-a19c-a02a05a34319-kube-api-access-lr98h\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.157241 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230fc0d1-ff11-476a-82be-177f83a0e81f-kube-api-access-7c5rg" (OuterVolumeSpecName: "kube-api-access-7c5rg") pod "230fc0d1-ff11-476a-82be-177f83a0e81f" (UID: "230fc0d1-ff11-476a-82be-177f83a0e81f"). InnerVolumeSpecName "kube-api-access-7c5rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.250651 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhr24\" (UniqueName: \"kubernetes.io/projected/9e32e613-504a-4221-a5ea-29c4768e4ef9-kube-api-access-vhr24\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.250686 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c5rg\" (UniqueName: \"kubernetes.io/projected/230fc0d1-ff11-476a-82be-177f83a0e81f-kube-api-access-7c5rg\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.490028 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jx67j"] Nov 24 09:10:31 crc kubenswrapper[4719]: W1124 09:10:31.490636 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod362cf151_7819_46b5_9b25_2f42aa6370ac.slice/crio-89d865d4c578c139422d37f09f2ea29611f46863b4c569b4fd7e22b6e6a6fe21 WatchSource:0}: Error finding container 89d865d4c578c139422d37f09f2ea29611f46863b4c569b4fd7e22b6e6a6fe21: Status 404 returned error can't find the container with id 89d865d4c578c139422d37f09f2ea29611f46863b4c569b4fd7e22b6e6a6fe21 Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.578912 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-36b5-account-create-vwrrf"] Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.587595 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-892d-account-create-rtmf4" event={"ID":"6e9b95eb-5130-4d13-9557-fe979505e602","Type":"ContainerDied","Data":"6d829d504d042191a669f62451f27f649625aebe62a3a7bae722391c6e851c59"} Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.587644 4719 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6d829d504d042191a669f62451f27f649625aebe62a3a7bae722391c6e851c59" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.587699 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-892d-account-create-rtmf4" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.590438 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jx67j" event={"ID":"362cf151-7819-46b5-9b25-2f42aa6370ac","Type":"ContainerStarted","Data":"89d865d4c578c139422d37f09f2ea29611f46863b4c569b4fd7e22b6e6a6fe21"} Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.596010 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qbjmv" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.596046 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qbjmv" event={"ID":"230fc0d1-ff11-476a-82be-177f83a0e81f","Type":"ContainerDied","Data":"7aa076c130e8a8893bd3d44862ab79408e0053b15afa1fd42df6c323f03b08e0"} Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.596076 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aa076c130e8a8893bd3d44862ab79408e0053b15afa1fd42df6c323f03b08e0" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.603591 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3bfe-account-create-l5xls" event={"ID":"17d86e1d-9f0f-4aec-a19c-a02a05a34319","Type":"ContainerDied","Data":"a7be9d6016c1ba6142798101dddf908e6e8a68104afb076135361c54626f8c92"} Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.603639 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7be9d6016c1ba6142798101dddf908e6e8a68104afb076135361c54626f8c92" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.603754 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3bfe-account-create-l5xls" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.616244 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2hszz" Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.621065 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2hszz" event={"ID":"9e32e613-504a-4221-a5ea-29c4768e4ef9","Type":"ContainerDied","Data":"6cd1dd163548e9062bc28382e3808ba7441fc7d45a433049b59961907bb55e72"} Nov 24 09:10:31 crc kubenswrapper[4719]: I1124 09:10:31.621109 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cd1dd163548e9062bc28382e3808ba7441fc7d45a433049b59961907bb55e72" Nov 24 09:10:32 crc kubenswrapper[4719]: I1124 09:10:32.626053 4719 generic.go:334] "Generic (PLEG): container finished" podID="362cf151-7819-46b5-9b25-2f42aa6370ac" containerID="c36f2fb55a6a9a55d0f711e2f83b034aeed25958d39828966e4b28b5b463cca2" exitCode=0 Nov 24 09:10:32 crc kubenswrapper[4719]: I1124 09:10:32.626152 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jx67j" event={"ID":"362cf151-7819-46b5-9b25-2f42aa6370ac","Type":"ContainerDied","Data":"c36f2fb55a6a9a55d0f711e2f83b034aeed25958d39828966e4b28b5b463cca2"} Nov 24 09:10:32 crc kubenswrapper[4719]: I1124 09:10:32.630104 4719 generic.go:334] "Generic (PLEG): container finished" podID="3f6fad86-e72c-41c1-8322-614721929c2a" containerID="f356a860b28f62e883b02e18d85a49ed993149b81f030cce90785ddf239c56ce" exitCode=0 Nov 24 09:10:32 crc kubenswrapper[4719]: I1124 09:10:32.630167 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-36b5-account-create-vwrrf" event={"ID":"3f6fad86-e72c-41c1-8322-614721929c2a","Type":"ContainerDied","Data":"f356a860b28f62e883b02e18d85a49ed993149b81f030cce90785ddf239c56ce"} Nov 24 09:10:32 crc kubenswrapper[4719]: I1124 09:10:32.630209 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-36b5-account-create-vwrrf" event={"ID":"3f6fad86-e72c-41c1-8322-614721929c2a","Type":"ContainerStarted","Data":"d9715967b52c06b1fe267a6f79b07530a60fa30e64e3793106a95efc245ba405"} Nov 24 09:10:33 crc kubenswrapper[4719]: I1124 09:10:33.259260 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:33 crc kubenswrapper[4719]: I1124 09:10:33.691197 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:10:33 crc kubenswrapper[4719]: I1124 09:10:33.753197 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-x6zhj"] Nov 24 09:10:33 crc kubenswrapper[4719]: I1124 09:10:33.753394 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" podUID="ceda8ef7-a576-4bb4-ab82-104436789689" containerName="dnsmasq-dns" containerID="cri-o://ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844" gracePeriod=10 Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.030009 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.060103 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jx67j" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.109237 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntv5f\" (UniqueName: \"kubernetes.io/projected/3f6fad86-e72c-41c1-8322-614721929c2a-kube-api-access-ntv5f\") pod \"3f6fad86-e72c-41c1-8322-614721929c2a\" (UID: \"3f6fad86-e72c-41c1-8322-614721929c2a\") " Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.109607 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/362cf151-7819-46b5-9b25-2f42aa6370ac-operator-scripts\") pod \"362cf151-7819-46b5-9b25-2f42aa6370ac\" (UID: \"362cf151-7819-46b5-9b25-2f42aa6370ac\") " Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.109718 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh4bg\" (UniqueName: \"kubernetes.io/projected/362cf151-7819-46b5-9b25-2f42aa6370ac-kube-api-access-lh4bg\") pod \"362cf151-7819-46b5-9b25-2f42aa6370ac\" (UID: \"362cf151-7819-46b5-9b25-2f42aa6370ac\") " Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.109756 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f6fad86-e72c-41c1-8322-614721929c2a-operator-scripts\") pod \"3f6fad86-e72c-41c1-8322-614721929c2a\" (UID: \"3f6fad86-e72c-41c1-8322-614721929c2a\") " Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.110635 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f6fad86-e72c-41c1-8322-614721929c2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f6fad86-e72c-41c1-8322-614721929c2a" (UID: "3f6fad86-e72c-41c1-8322-614721929c2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.110998 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/362cf151-7819-46b5-9b25-2f42aa6370ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "362cf151-7819-46b5-9b25-2f42aa6370ac" (UID: "362cf151-7819-46b5-9b25-2f42aa6370ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.120010 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362cf151-7819-46b5-9b25-2f42aa6370ac-kube-api-access-lh4bg" (OuterVolumeSpecName: "kube-api-access-lh4bg") pod "362cf151-7819-46b5-9b25-2f42aa6370ac" (UID: "362cf151-7819-46b5-9b25-2f42aa6370ac"). InnerVolumeSpecName "kube-api-access-lh4bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.133263 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f6fad86-e72c-41c1-8322-614721929c2a-kube-api-access-ntv5f" (OuterVolumeSpecName: "kube-api-access-ntv5f") pod "3f6fad86-e72c-41c1-8322-614721929c2a" (UID: "3f6fad86-e72c-41c1-8322-614721929c2a"). InnerVolumeSpecName "kube-api-access-ntv5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.211690 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh4bg\" (UniqueName: \"kubernetes.io/projected/362cf151-7819-46b5-9b25-2f42aa6370ac-kube-api-access-lh4bg\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.211714 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f6fad86-e72c-41c1-8322-614721929c2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.211725 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntv5f\" (UniqueName: \"kubernetes.io/projected/3f6fad86-e72c-41c1-8322-614721929c2a-kube-api-access-ntv5f\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.211735 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/362cf151-7819-46b5-9b25-2f42aa6370ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.275190 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.312524 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-ovsdbserver-nb\") pod \"ceda8ef7-a576-4bb4-ab82-104436789689\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.312653 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-dns-svc\") pod \"ceda8ef7-a576-4bb4-ab82-104436789689\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.312694 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phrx4\" (UniqueName: \"kubernetes.io/projected/ceda8ef7-a576-4bb4-ab82-104436789689-kube-api-access-phrx4\") pod \"ceda8ef7-a576-4bb4-ab82-104436789689\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.312749 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-config\") pod \"ceda8ef7-a576-4bb4-ab82-104436789689\" (UID: \"ceda8ef7-a576-4bb4-ab82-104436789689\") " Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.323238 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceda8ef7-a576-4bb4-ab82-104436789689-kube-api-access-phrx4" (OuterVolumeSpecName: "kube-api-access-phrx4") pod "ceda8ef7-a576-4bb4-ab82-104436789689" (UID: "ceda8ef7-a576-4bb4-ab82-104436789689"). InnerVolumeSpecName "kube-api-access-phrx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.378526 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ceda8ef7-a576-4bb4-ab82-104436789689" (UID: "ceda8ef7-a576-4bb4-ab82-104436789689"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.383051 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ceda8ef7-a576-4bb4-ab82-104436789689" (UID: "ceda8ef7-a576-4bb4-ab82-104436789689"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.413144 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-config" (OuterVolumeSpecName: "config") pod "ceda8ef7-a576-4bb4-ab82-104436789689" (UID: "ceda8ef7-a576-4bb4-ab82-104436789689"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.414122 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.414206 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.414265 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phrx4\" (UniqueName: \"kubernetes.io/projected/ceda8ef7-a576-4bb4-ab82-104436789689-kube-api-access-phrx4\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.414321 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda8ef7-a576-4bb4-ab82-104436789689-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.561767 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.561911 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.562026 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.562842 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4aeeb69c1ab7122cad95da513920656c5e4ba5b3dd78419e124282e98483b06"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.563006 4719 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://c4aeeb69c1ab7122cad95da513920656c5e4ba5b3dd78419e124282e98483b06" gracePeriod=600 Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.644442 4719 generic.go:334] "Generic (PLEG): container finished" podID="ceda8ef7-a576-4bb4-ab82-104436789689" containerID="ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844" exitCode=0 Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.644507 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" event={"ID":"ceda8ef7-a576-4bb4-ab82-104436789689","Type":"ContainerDied","Data":"ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844"} Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.644532 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" event={"ID":"ceda8ef7-a576-4bb4-ab82-104436789689","Type":"ContainerDied","Data":"fdb8fc89059070ee36b34d7ae48886eaa7cf85137a0f4264925bee944c9bf152"} Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.644548 4719 scope.go:117] "RemoveContainer" containerID="ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.644801 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-x6zhj" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.647390 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-36b5-account-create-vwrrf" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.647396 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-36b5-account-create-vwrrf" event={"ID":"3f6fad86-e72c-41c1-8322-614721929c2a","Type":"ContainerDied","Data":"d9715967b52c06b1fe267a6f79b07530a60fa30e64e3793106a95efc245ba405"} Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.647580 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9715967b52c06b1fe267a6f79b07530a60fa30e64e3793106a95efc245ba405" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.651602 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jx67j" event={"ID":"362cf151-7819-46b5-9b25-2f42aa6370ac","Type":"ContainerDied","Data":"89d865d4c578c139422d37f09f2ea29611f46863b4c569b4fd7e22b6e6a6fe21"} Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.651825 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d865d4c578c139422d37f09f2ea29611f46863b4c569b4fd7e22b6e6a6fe21" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.651706 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jx67j" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.662730 4719 scope.go:117] "RemoveContainer" containerID="25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.664528 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-x6zhj"] Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.680277 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-x6zhj"] Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.684107 4719 scope.go:117] "RemoveContainer" containerID="ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844" Nov 24 09:10:34 crc kubenswrapper[4719]: E1124 09:10:34.684479 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844\": container with ID starting with ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844 not found: ID does not exist" containerID="ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.684513 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844"} err="failed to get container status \"ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844\": rpc error: code = NotFound desc = could not find container \"ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844\": container with ID starting with ae63f37f6a930ac2847432c1339085d1010c17fd258d014c25e01b39e0321844 not found: ID does not exist" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.684532 4719 scope.go:117] "RemoveContainer" containerID="25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231" Nov 24 09:10:34 crc kubenswrapper[4719]: E1124 09:10:34.684775 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231\": container with ID starting with 25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231 not found: ID does not exist" containerID="25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231" Nov 24 09:10:34 crc kubenswrapper[4719]: I1124 09:10:34.684802 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231"} err="failed to get container status \"25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231\": rpc error: code = NotFound desc = could not find container \"25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231\": container with ID starting with 25741d6dc8d434939062f9c43f4a033bc8c99e4f03303cc4160852923ec64231 not found: ID does not exist" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.660926 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="c4aeeb69c1ab7122cad95da513920656c5e4ba5b3dd78419e124282e98483b06" exitCode=0 Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.660968 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" 
event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"c4aeeb69c1ab7122cad95da513920656c5e4ba5b3dd78419e124282e98483b06"} Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.661449 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"abd7ce8489d65ccef4f15a6a456d72d66be28ce94d53032a08cda3487cfa7499"} Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.661469 4719 scope.go:117] "RemoveContainer" containerID="e9bafa1ff8cebfd6f7a09482f5227abe69557f213f9dda16fe6ddb7212992d3f" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.993482 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bp7gj"] Nov 24 09:10:35 crc kubenswrapper[4719]: E1124 09:10:35.994143 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e32e613-504a-4221-a5ea-29c4768e4ef9" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.994234 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e32e613-504a-4221-a5ea-29c4768e4ef9" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: E1124 09:10:35.994316 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b95eb-5130-4d13-9557-fe979505e602" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.994406 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b95eb-5130-4d13-9557-fe979505e602" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: E1124 09:10:35.994467 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230fc0d1-ff11-476a-82be-177f83a0e81f" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.994520 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="230fc0d1-ff11-476a-82be-177f83a0e81f" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: E1124 09:10:35.994584 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f6fad86-e72c-41c1-8322-614721929c2a" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.994641 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f6fad86-e72c-41c1-8322-614721929c2a" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: E1124 09:10:35.994796 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceda8ef7-a576-4bb4-ab82-104436789689" containerName="init" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.994853 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceda8ef7-a576-4bb4-ab82-104436789689" containerName="init" Nov 24 09:10:35 crc kubenswrapper[4719]: E1124 09:10:35.994919 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceda8ef7-a576-4bb4-ab82-104436789689" containerName="dnsmasq-dns" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.994975 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceda8ef7-a576-4bb4-ab82-104436789689" containerName="dnsmasq-dns" Nov 24 09:10:35 crc kubenswrapper[4719]: E1124 09:10:35.995031 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d86e1d-9f0f-4aec-a19c-a02a05a34319" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995115 4719 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="17d86e1d-9f0f-4aec-a19c-a02a05a34319" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: E1124 09:10:35.995195 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362cf151-7819-46b5-9b25-2f42aa6370ac" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995264 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="362cf151-7819-46b5-9b25-2f42aa6370ac" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995483 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e32e613-504a-4221-a5ea-29c4768e4ef9" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995558 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f6fad86-e72c-41c1-8322-614721929c2a" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995623 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceda8ef7-a576-4bb4-ab82-104436789689" containerName="dnsmasq-dns" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995674 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="362cf151-7819-46b5-9b25-2f42aa6370ac" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995726 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="230fc0d1-ff11-476a-82be-177f83a0e81f" containerName="mariadb-database-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995780 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b95eb-5130-4d13-9557-fe979505e602" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.995848 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d86e1d-9f0f-4aec-a19c-a02a05a34319" containerName="mariadb-account-create" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.996514 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:35 crc kubenswrapper[4719]: I1124 09:10:35.999279 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vwfrr" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.002150 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.038330 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bp7gj"] Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.039156 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-combined-ca-bundle\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.039244 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-config-data\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.039338 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdbxx\" (UniqueName: \"kubernetes.io/projected/614a41e1-aa75-4eff-818d-cd0686bc73b0-kube-api-access-wdbxx\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.039398 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-db-sync-config-data\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.141103 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-db-sync-config-data\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.141274 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-combined-ca-bundle\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.141348 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-config-data\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.141472 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdbxx\" (UniqueName: \"kubernetes.io/projected/614a41e1-aa75-4eff-818d-cd0686bc73b0-kube-api-access-wdbxx\") pod 
\"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.148721 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-db-sync-config-data\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.155454 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-combined-ca-bundle\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.155547 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-config-data\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.159817 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdbxx\" (UniqueName: \"kubernetes.io/projected/614a41e1-aa75-4eff-818d-cd0686bc73b0-kube-api-access-wdbxx\") pod \"glance-db-sync-bp7gj\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.313898 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.532010 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceda8ef7-a576-4bb4-ab82-104436789689" path="/var/lib/kubelet/pods/ceda8ef7-a576-4bb4-ab82-104436789689/volumes" Nov 24 09:10:36 crc kubenswrapper[4719]: I1124 09:10:36.856260 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bp7gj"] Nov 24 09:10:37 crc kubenswrapper[4719]: I1124 09:10:37.681888 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bp7gj" event={"ID":"614a41e1-aa75-4eff-818d-cd0686bc73b0","Type":"ContainerStarted","Data":"9db33ab8f6dca0d51eb166261e07544e36701cf300437a09ddb110628c57959a"} Nov 24 09:10:39 crc kubenswrapper[4719]: I1124 09:10:39.699023 4719 generic.go:334] "Generic (PLEG): container finished" podID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerID="c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05" exitCode=0 Nov 24 09:10:39 crc kubenswrapper[4719]: I1124 09:10:39.699086 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b","Type":"ContainerDied","Data":"c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05"} Nov 24 09:10:39 crc kubenswrapper[4719]: I1124 09:10:39.701189 4719 generic.go:334] "Generic (PLEG): container finished" podID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerID="4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d" exitCode=0 Nov 24 09:10:39 crc kubenswrapper[4719]: I1124 09:10:39.701212 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"957bbc3c-6b1d-403a-a49d-6bafef454a48","Type":"ContainerDied","Data":"4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d"} Nov 24 09:10:40 crc kubenswrapper[4719]: I1124 09:10:40.710429 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b","Type":"ContainerStarted","Data":"2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a"} Nov 24 09:10:40 crc kubenswrapper[4719]: I1124 09:10:40.711216 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 09:10:40 crc kubenswrapper[4719]: I1124 09:10:40.712631 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"957bbc3c-6b1d-403a-a49d-6bafef454a48","Type":"ContainerStarted","Data":"8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5"} Nov 24 09:10:40 crc kubenswrapper[4719]: I1124 09:10:40.712838 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:10:40 crc kubenswrapper[4719]: I1124 09:10:40.744829 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.407363378 podStartE2EDuration="1m0.744810411s" podCreationTimestamp="2025-11-24 09:09:40 +0000 UTC" firstStartedPulling="2025-11-24 09:09:42.84773218 +0000 UTC m=+959.179005432" lastFinishedPulling="2025-11-24 09:10:06.185179213 +0000 UTC m=+982.516452465" observedRunningTime="2025-11-24 09:10:40.735496642 +0000 UTC m=+1017.066769904" watchObservedRunningTime="2025-11-24 09:10:40.744810411 +0000 UTC m=+1017.076083663" Nov 24 09:10:40 crc kubenswrapper[4719]: I1124 09:10:40.776592 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.828574771 podStartE2EDuration="59.776569379s" podCreationTimestamp="2025-11-24 09:09:41 +0000 UTC" firstStartedPulling="2025-11-24 09:09:43.244673171 +0000 UTC m=+959.575946423" lastFinishedPulling="2025-11-24 09:10:06.192667779 +0000 UTC m=+982.523941031" observedRunningTime="2025-11-24 09:10:40.77000939 +0000 UTC m=+1017.101282662" watchObservedRunningTime="2025-11-24 09:10:40.776569379 +0000 UTC m=+1017.107842641" Nov 24 09:10:43 crc kubenswrapper[4719]: I1124 09:10:43.796444 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 09:10:45 crc kubenswrapper[4719]: I1124 09:10:45.872340 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ccf6d" podUID="225b57e5-7f49-4b51-87db-6c790f23bf6e" containerName="ovn-controller" probeResult="failure" output=< Nov 24 09:10:45 crc kubenswrapper[4719]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 09:10:45 crc kubenswrapper[4719]: > Nov 24 09:10:45 crc kubenswrapper[4719]: I1124 09:10:45.908369 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:10:45 crc kubenswrapper[4719]: I1124 09:10:45.909240 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bk9qz" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.124758 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ccf6d-config-rtjbb"] Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.126306 4719 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.131118 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.141981 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ccf6d-config-rtjbb"] Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.234144 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-additional-scripts\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.234225 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-scripts\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.234267 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.234287 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmj4\" (UniqueName: \"kubernetes.io/projected/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-kube-api-access-gtmj4\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.234324 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-log-ovn\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.234356 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run-ovn\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336102 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-log-ovn\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336173 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run-ovn\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336215 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-additional-scripts\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336303 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-scripts\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336342 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-log-ovn\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336357 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336367 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run-ovn\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336381 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtmj4\" (UniqueName: \"kubernetes.io/projected/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-kube-api-access-gtmj4\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.336613 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.337421 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-additional-scripts\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.339198 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-scripts\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.357748 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtmj4\" (UniqueName: \"kubernetes.io/projected/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-kube-api-access-gtmj4\") pod \"ovn-controller-ccf6d-config-rtjbb\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:46 crc kubenswrapper[4719]: I1124 09:10:46.450158 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:48 crc kubenswrapper[4719]: I1124 09:10:48.636733 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ccf6d-config-rtjbb"] Nov 24 09:10:48 crc kubenswrapper[4719]: I1124 09:10:48.779480 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ccf6d-config-rtjbb" event={"ID":"8bd1624c-9ed9-4290-b5cf-6b188c6b6830","Type":"ContainerStarted","Data":"ed96b4a1768acd6cc44e0aa31b2dfd438068846159d2da4a6702752e2f5beed0"} Nov 24 09:10:49 crc kubenswrapper[4719]: I1124 09:10:49.787904 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bp7gj" event={"ID":"614a41e1-aa75-4eff-818d-cd0686bc73b0","Type":"ContainerStarted","Data":"31e009c359a2805feace323364cd3fc336cfdaa32b8b6cdfd630de3f46e13e8e"} Nov 24 09:10:49 crc kubenswrapper[4719]: I1124 09:10:49.790284 4719 generic.go:334] "Generic (PLEG): container finished" podID="8bd1624c-9ed9-4290-b5cf-6b188c6b6830" containerID="898dcf6011ae7ed2019f157bb57e4f2cdd36e59aa4b69e0daa5c20223a54c457" exitCode=0 Nov 24 09:10:49 crc kubenswrapper[4719]: I1124 09:10:49.790321 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ccf6d-config-rtjbb" event={"ID":"8bd1624c-9ed9-4290-b5cf-6b188c6b6830","Type":"ContainerDied","Data":"898dcf6011ae7ed2019f157bb57e4f2cdd36e59aa4b69e0daa5c20223a54c457"} Nov 24 09:10:49 crc kubenswrapper[4719]: I1124 09:10:49.811742 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bp7gj" podStartSLOduration=3.197057341 podStartE2EDuration="14.811717682s" podCreationTimestamp="2025-11-24 09:10:35 +0000 UTC" firstStartedPulling="2025-11-24 09:10:36.863862197 +0000 UTC m=+1013.195135459" lastFinishedPulling="2025-11-24 09:10:48.478522558 +0000 UTC m=+1024.809795800" observedRunningTime="2025-11-24 09:10:49.803901376 +0000 UTC m=+1026.135174628" watchObservedRunningTime="2025-11-24 09:10:49.811717682 +0000 UTC m=+1026.142990934" Nov 24 09:10:50 crc kubenswrapper[4719]: I1124 09:10:50.887895 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ccf6d" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.160288 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222196 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run-ovn\") pod \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222277 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtmj4\" (UniqueName: \"kubernetes.io/projected/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-kube-api-access-gtmj4\") pod \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222313 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-log-ovn\") pod \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222451 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-additional-scripts\") pod \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222487 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-scripts\") pod \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222518 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run\") pod \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\" (UID: \"8bd1624c-9ed9-4290-b5cf-6b188c6b6830\") " Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222659 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8bd1624c-9ed9-4290-b5cf-6b188c6b6830" (UID: "8bd1624c-9ed9-4290-b5cf-6b188c6b6830"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222789 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run" (OuterVolumeSpecName: "var-run") pod "8bd1624c-9ed9-4290-b5cf-6b188c6b6830" (UID: "8bd1624c-9ed9-4290-b5cf-6b188c6b6830"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222887 4719 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222908 4719 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.223272 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8bd1624c-9ed9-4290-b5cf-6b188c6b6830" (UID: "8bd1624c-9ed9-4290-b5cf-6b188c6b6830"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.223642 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-scripts" (OuterVolumeSpecName: "scripts") pod "8bd1624c-9ed9-4290-b5cf-6b188c6b6830" (UID: "8bd1624c-9ed9-4290-b5cf-6b188c6b6830"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.222651 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8bd1624c-9ed9-4290-b5cf-6b188c6b6830" (UID: "8bd1624c-9ed9-4290-b5cf-6b188c6b6830"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.228263 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-kube-api-access-gtmj4" (OuterVolumeSpecName: "kube-api-access-gtmj4") pod "8bd1624c-9ed9-4290-b5cf-6b188c6b6830" (UID: "8bd1624c-9ed9-4290-b5cf-6b188c6b6830"). InnerVolumeSpecName "kube-api-access-gtmj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.324801 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtmj4\" (UniqueName: \"kubernetes.io/projected/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-kube-api-access-gtmj4\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.325151 4719 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.325228 4719 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.325286 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd1624c-9ed9-4290-b5cf-6b188c6b6830-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.806952 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ccf6d-config-rtjbb" event={"ID":"8bd1624c-9ed9-4290-b5cf-6b188c6b6830","Type":"ContainerDied","Data":"ed96b4a1768acd6cc44e0aa31b2dfd438068846159d2da4a6702752e2f5beed0"} Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.807248 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed96b4a1768acd6cc44e0aa31b2dfd438068846159d2da4a6702752e2f5beed0" Nov 24 09:10:51 crc kubenswrapper[4719]: I1124 09:10:51.807029 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ccf6d-config-rtjbb" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.024380 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.279626 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ccf6d-config-rtjbb"] Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.294090 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ccf6d-config-rtjbb"] Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.380503 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ccf6d-config-6cbwv"] Nov 24 09:10:52 crc kubenswrapper[4719]: E1124 09:10:52.380827 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd1624c-9ed9-4290-b5cf-6b188c6b6830" containerName="ovn-config" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.380843 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd1624c-9ed9-4290-b5cf-6b188c6b6830" containerName="ovn-config" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.381025 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd1624c-9ed9-4290-b5cf-6b188c6b6830" containerName="ovn-config" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.381606 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.386919 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.399579 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ccf6d-config-6cbwv"] Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.442293 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-additional-scripts\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.442595 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-scripts\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.442686 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.442755 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqbv\" (UniqueName: \"kubernetes.io/projected/bb170523-521b-4027-b015-8f5711ea299d-kube-api-access-7vqbv\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.442840 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run-ovn\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.442961 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-log-ovn\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.517407 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.529896 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bd1624c-9ed9-4290-b5cf-6b188c6b6830" path="/var/lib/kubelet/pods/8bd1624c-9ed9-4290-b5cf-6b188c6b6830/volumes" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.544309 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-additional-scripts\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.544351 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-scripts\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.544388 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.544403 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqbv\" (UniqueName: \"kubernetes.io/projected/bb170523-521b-4027-b015-8f5711ea299d-kube-api-access-7vqbv\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.544436 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run-ovn\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.544497 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-log-ovn\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.544772 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.545148 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-log-ovn\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.545190 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run-ovn\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.545806 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-additional-scripts\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.546743 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-scripts\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.575625 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqbv\" (UniqueName: \"kubernetes.io/projected/bb170523-521b-4027-b015-8f5711ea299d-kube-api-access-7vqbv\") pod \"ovn-controller-ccf6d-config-6cbwv\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:52 crc kubenswrapper[4719]: I1124 09:10:52.696001 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:53 crc kubenswrapper[4719]: W1124 09:10:53.192692 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb170523_521b_4027_b015_8f5711ea299d.slice/crio-e0053f787215208ec4584f563132a6e6e6ed9e5f44abd2eed3918c6d43e238f4 WatchSource:0}: Error finding container e0053f787215208ec4584f563132a6e6e6ed9e5f44abd2eed3918c6d43e238f4: Status 404 returned error can't find the container with id e0053f787215208ec4584f563132a6e6e6ed9e5f44abd2eed3918c6d43e238f4 Nov 24 09:10:53 crc kubenswrapper[4719]: I1124 09:10:53.200430 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ccf6d-config-6cbwv"] Nov 24 09:10:53 crc kubenswrapper[4719]: I1124 09:10:53.827612 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ccf6d-config-6cbwv" event={"ID":"bb170523-521b-4027-b015-8f5711ea299d","Type":"ContainerStarted","Data":"8868003a1fe41de35e9e1da9657efd5cab96f315287562ff679bd74ca1e575b0"} Nov 24 09:10:53 crc kubenswrapper[4719]: I1124 09:10:53.827950 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ccf6d-config-6cbwv" event={"ID":"bb170523-521b-4027-b015-8f5711ea299d","Type":"ContainerStarted","Data":"e0053f787215208ec4584f563132a6e6e6ed9e5f44abd2eed3918c6d43e238f4"} Nov 24 09:10:53 crc kubenswrapper[4719]: I1124 09:10:53.845226 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ccf6d-config-6cbwv" podStartSLOduration=1.845185924 podStartE2EDuration="1.845185924s" podCreationTimestamp="2025-11-24 09:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:10:53.843492375 +0000 UTC m=+1030.174765627" watchObservedRunningTime="2025-11-24 09:10:53.845185924 +0000 UTC m=+1030.176459186" Nov 24 09:10:54 crc kubenswrapper[4719]: I1124 09:10:54.837491 4719 generic.go:334] "Generic (PLEG): container finished" podID="bb170523-521b-4027-b015-8f5711ea299d" containerID="8868003a1fe41de35e9e1da9657efd5cab96f315287562ff679bd74ca1e575b0" exitCode=0 Nov 24 09:10:54 crc kubenswrapper[4719]: I1124 09:10:54.837563 4719 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-ccf6d-config-6cbwv" event={"ID":"bb170523-521b-4027-b015-8f5711ea299d","Type":"ContainerDied","Data":"8868003a1fe41de35e9e1da9657efd5cab96f315287562ff679bd74ca1e575b0"} Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.148600 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.218515 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-additional-scripts\") pod \"bb170523-521b-4027-b015-8f5711ea299d\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.218591 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-log-ovn\") pod \"bb170523-521b-4027-b015-8f5711ea299d\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.218667 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-scripts\") pod \"bb170523-521b-4027-b015-8f5711ea299d\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.219637 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run-ovn\") pod \"bb170523-521b-4027-b015-8f5711ea299d\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.218711 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bb170523-521b-4027-b015-8f5711ea299d" (UID: "bb170523-521b-4027-b015-8f5711ea299d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.219398 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bb170523-521b-4027-b015-8f5711ea299d" (UID: "bb170523-521b-4027-b015-8f5711ea299d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.219577 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-scripts" (OuterVolumeSpecName: "scripts") pod "bb170523-521b-4027-b015-8f5711ea299d" (UID: "bb170523-521b-4027-b015-8f5711ea299d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.219727 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqbv\" (UniqueName: \"kubernetes.io/projected/bb170523-521b-4027-b015-8f5711ea299d-kube-api-access-7vqbv\") pod \"bb170523-521b-4027-b015-8f5711ea299d\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.219765 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bb170523-521b-4027-b015-8f5711ea299d" (UID: "bb170523-521b-4027-b015-8f5711ea299d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.219813 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run\") pod \"bb170523-521b-4027-b015-8f5711ea299d\" (UID: \"bb170523-521b-4027-b015-8f5711ea299d\") " Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.219910 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run" (OuterVolumeSpecName: "var-run") pod "bb170523-521b-4027-b015-8f5711ea299d" (UID: "bb170523-521b-4027-b015-8f5711ea299d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.220180 4719 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.220194 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.220202 4719 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.220209 4719 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb170523-521b-4027-b015-8f5711ea299d-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.220216 4719 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb170523-521b-4027-b015-8f5711ea299d-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.225201 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb170523-521b-4027-b015-8f5711ea299d-kube-api-access-7vqbv" (OuterVolumeSpecName: "kube-api-access-7vqbv") pod "bb170523-521b-4027-b015-8f5711ea299d" (UID: "bb170523-521b-4027-b015-8f5711ea299d"). InnerVolumeSpecName "kube-api-access-7vqbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.321397 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqbv\" (UniqueName: \"kubernetes.io/projected/bb170523-521b-4027-b015-8f5711ea299d-kube-api-access-7vqbv\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.862910 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ccf6d-config-6cbwv" event={"ID":"bb170523-521b-4027-b015-8f5711ea299d","Type":"ContainerDied","Data":"e0053f787215208ec4584f563132a6e6e6ed9e5f44abd2eed3918c6d43e238f4"} Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.862947 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0053f787215208ec4584f563132a6e6e6ed9e5f44abd2eed3918c6d43e238f4" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.862960 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ccf6d-config-6cbwv" Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.927724 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ccf6d-config-6cbwv"] Nov 24 09:10:56 crc kubenswrapper[4719]: I1124 09:10:56.932964 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ccf6d-config-6cbwv"] Nov 24 09:10:57 crc kubenswrapper[4719]: I1124 09:10:57.875614 4719 generic.go:334] "Generic (PLEG): container finished" podID="614a41e1-aa75-4eff-818d-cd0686bc73b0" containerID="31e009c359a2805feace323364cd3fc336cfdaa32b8b6cdfd630de3f46e13e8e" exitCode=0 Nov 24 09:10:57 crc kubenswrapper[4719]: I1124 09:10:57.875723 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bp7gj" event={"ID":"614a41e1-aa75-4eff-818d-cd0686bc73b0","Type":"ContainerDied","Data":"31e009c359a2805feace323364cd3fc336cfdaa32b8b6cdfd630de3f46e13e8e"} Nov 24 09:10:58 crc kubenswrapper[4719]: I1124 09:10:58.535283 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb170523-521b-4027-b015-8f5711ea299d" path="/var/lib/kubelet/pods/bb170523-521b-4027-b015-8f5711ea299d/volumes" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.233476 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bp7gj" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.390021 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-combined-ca-bundle\") pod \"614a41e1-aa75-4eff-818d-cd0686bc73b0\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.390135 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdbxx\" (UniqueName: \"kubernetes.io/projected/614a41e1-aa75-4eff-818d-cd0686bc73b0-kube-api-access-wdbxx\") pod \"614a41e1-aa75-4eff-818d-cd0686bc73b0\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.390224 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-db-sync-config-data\") pod \"614a41e1-aa75-4eff-818d-cd0686bc73b0\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.390253 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-config-data\") pod \"614a41e1-aa75-4eff-818d-cd0686bc73b0\" (UID: \"614a41e1-aa75-4eff-818d-cd0686bc73b0\") " Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.395691 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "614a41e1-aa75-4eff-818d-cd0686bc73b0" (UID: "614a41e1-aa75-4eff-818d-cd0686bc73b0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.402406 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614a41e1-aa75-4eff-818d-cd0686bc73b0-kube-api-access-wdbxx" (OuterVolumeSpecName: "kube-api-access-wdbxx") pod "614a41e1-aa75-4eff-818d-cd0686bc73b0" (UID: "614a41e1-aa75-4eff-818d-cd0686bc73b0"). InnerVolumeSpecName "kube-api-access-wdbxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.411993 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "614a41e1-aa75-4eff-818d-cd0686bc73b0" (UID: "614a41e1-aa75-4eff-818d-cd0686bc73b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.426991 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-config-data" (OuterVolumeSpecName: "config-data") pod "614a41e1-aa75-4eff-818d-cd0686bc73b0" (UID: "614a41e1-aa75-4eff-818d-cd0686bc73b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.492020 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.492074 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdbxx\" (UniqueName: \"kubernetes.io/projected/614a41e1-aa75-4eff-818d-cd0686bc73b0-kube-api-access-wdbxx\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.492088 4719 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.492099 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/614a41e1-aa75-4eff-818d-cd0686bc73b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.893780 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bp7gj" event={"ID":"614a41e1-aa75-4eff-818d-cd0686bc73b0","Type":"ContainerDied","Data":"9db33ab8f6dca0d51eb166261e07544e36701cf300437a09ddb110628c57959a"} Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.894087 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db33ab8f6dca0d51eb166261e07544e36701cf300437a09ddb110628c57959a" Nov 24 09:10:59 crc kubenswrapper[4719]: I1124 09:10:59.893854 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bp7gj" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.340898 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-znrbz"] Nov 24 09:11:00 crc kubenswrapper[4719]: E1124 09:11:00.341219 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb170523-521b-4027-b015-8f5711ea299d" containerName="ovn-config" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.341230 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb170523-521b-4027-b015-8f5711ea299d" containerName="ovn-config" Nov 24 09:11:00 crc kubenswrapper[4719]: E1124 09:11:00.341247 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614a41e1-aa75-4eff-818d-cd0686bc73b0" containerName="glance-db-sync" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.341253 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="614a41e1-aa75-4eff-818d-cd0686bc73b0" containerName="glance-db-sync" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.341418 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="614a41e1-aa75-4eff-818d-cd0686bc73b0" containerName="glance-db-sync" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.341443 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb170523-521b-4027-b015-8f5711ea299d" containerName="ovn-config" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.342165 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.360961 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-znrbz"] Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.507542 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.507816 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-config\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.507861 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9zz\" (UniqueName: \"kubernetes.io/projected/dfd6020d-d20f-434a-8a51-b78a86354104-kube-api-access-ll9zz\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.507895 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-dns-svc\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.507910 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.608901 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.608997 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.609071 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-config\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.609153 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ll9zz\" (UniqueName: \"kubernetes.io/projected/dfd6020d-d20f-434a-8a51-b78a86354104-kube-api-access-ll9zz\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.609213 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-dns-svc\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.609877 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.609892 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.610022 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-dns-svc\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.610059 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-config\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.639802 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll9zz\" (UniqueName: \"kubernetes.io/projected/dfd6020d-d20f-434a-8a51-b78a86354104-kube-api-access-ll9zz\") pod \"dnsmasq-dns-554567b4f7-znrbz\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:00 crc kubenswrapper[4719]: I1124 09:11:00.708220 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:01 crc kubenswrapper[4719]: W1124 09:11:01.204904 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfd6020d_d20f_434a_8a51_b78a86354104.slice/crio-76efd6987bad0ad1cd681e1ba728c0acae7cce3d231e426a86e08cad07d26696 WatchSource:0}: Error finding container 76efd6987bad0ad1cd681e1ba728c0acae7cce3d231e426a86e08cad07d26696: Status 404 returned error can't find the container with id 76efd6987bad0ad1cd681e1ba728c0acae7cce3d231e426a86e08cad07d26696 Nov 24 09:11:01 crc kubenswrapper[4719]: I1124 09:11:01.205294 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-znrbz"] Nov 24 09:11:01 crc kubenswrapper[4719]: I1124 09:11:01.943781 4719 generic.go:334] "Generic (PLEG): container finished" podID="dfd6020d-d20f-434a-8a51-b78a86354104" containerID="918e316484ce13cb4b45ce3779c1a4fec3b63bbe68c5f787868b52b4f824bf5e" exitCode=0 Nov 24 09:11:01 crc kubenswrapper[4719]: I1124 09:11:01.943884 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" event={"ID":"dfd6020d-d20f-434a-8a51-b78a86354104","Type":"ContainerDied","Data":"918e316484ce13cb4b45ce3779c1a4fec3b63bbe68c5f787868b52b4f824bf5e"} Nov 24 09:11:01 crc kubenswrapper[4719]: I1124 09:11:01.944060 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" event={"ID":"dfd6020d-d20f-434a-8a51-b78a86354104","Type":"ContainerStarted","Data":"76efd6987bad0ad1cd681e1ba728c0acae7cce3d231e426a86e08cad07d26696"} Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.023461 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.373932 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xtx8w"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.375237 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.393459 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xtx8w"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.514873 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b51e-account-create-chwq5"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.515822 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.519220 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.526340 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.538313 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8d2\" (UniqueName: \"kubernetes.io/projected/be421b32-1776-4720-b49e-0188e6cbad0f-kube-api-access-fw8d2\") pod \"barbican-db-create-xtx8w\" (UID: \"be421b32-1776-4720-b49e-0188e6cbad0f\") " pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.538392 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be421b32-1776-4720-b49e-0188e6cbad0f-operator-scripts\") pod \"barbican-db-create-xtx8w\" (UID: \"be421b32-1776-4720-b49e-0188e6cbad0f\") " pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.569437 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b51e-account-create-chwq5"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.603525 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kknhq"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.604568 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.624369 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kknhq"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.640858 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28882cd2-f05b-4e9a-8e96-1c49236337db-operator-scripts\") pod \"barbican-b51e-account-create-chwq5\" (UID: \"28882cd2-f05b-4e9a-8e96-1c49236337db\") " pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.640909 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8d2\" (UniqueName: \"kubernetes.io/projected/be421b32-1776-4720-b49e-0188e6cbad0f-kube-api-access-fw8d2\") pod \"barbican-db-create-xtx8w\" (UID: \"be421b32-1776-4720-b49e-0188e6cbad0f\") " pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.640935 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be421b32-1776-4720-b49e-0188e6cbad0f-operator-scripts\") pod \"barbican-db-create-xtx8w\" (UID: \"be421b32-1776-4720-b49e-0188e6cbad0f\") " pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.641055 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8662g\" (UniqueName: \"kubernetes.io/projected/28882cd2-f05b-4e9a-8e96-1c49236337db-kube-api-access-8662g\") pod \"barbican-b51e-account-create-chwq5\" (UID: \"28882cd2-f05b-4e9a-8e96-1c49236337db\") " pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:02 crc 
kubenswrapper[4719]: I1124 09:11:02.642027 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be421b32-1776-4720-b49e-0188e6cbad0f-operator-scripts\") pod \"barbican-db-create-xtx8w\" (UID: \"be421b32-1776-4720-b49e-0188e6cbad0f\") " pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.699928 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8d2\" (UniqueName: \"kubernetes.io/projected/be421b32-1776-4720-b49e-0188e6cbad0f-kube-api-access-fw8d2\") pod \"barbican-db-create-xtx8w\" (UID: \"be421b32-1776-4720-b49e-0188e6cbad0f\") " pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.742593 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28882cd2-f05b-4e9a-8e96-1c49236337db-operator-scripts\") pod \"barbican-b51e-account-create-chwq5\" (UID: \"28882cd2-f05b-4e9a-8e96-1c49236337db\") " pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.742656 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e008dc82-a46e-4cb3-b2c7-d05598f51373-operator-scripts\") pod \"cinder-db-create-kknhq\" (UID: \"e008dc82-a46e-4cb3-b2c7-d05598f51373\") " pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.742679 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc8c6\" (UniqueName: \"kubernetes.io/projected/e008dc82-a46e-4cb3-b2c7-d05598f51373-kube-api-access-bc8c6\") pod \"cinder-db-create-kknhq\" (UID: \"e008dc82-a46e-4cb3-b2c7-d05598f51373\") " pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.742741 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8662g\" (UniqueName: \"kubernetes.io/projected/28882cd2-f05b-4e9a-8e96-1c49236337db-kube-api-access-8662g\") pod \"barbican-b51e-account-create-chwq5\" (UID: \"28882cd2-f05b-4e9a-8e96-1c49236337db\") " pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.743333 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28882cd2-f05b-4e9a-8e96-1c49236337db-operator-scripts\") pod \"barbican-b51e-account-create-chwq5\" (UID: \"28882cd2-f05b-4e9a-8e96-1c49236337db\") " pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.787575 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8662g\" (UniqueName: \"kubernetes.io/projected/28882cd2-f05b-4e9a-8e96-1c49236337db-kube-api-access-8662g\") pod \"barbican-b51e-account-create-chwq5\" (UID: \"28882cd2-f05b-4e9a-8e96-1c49236337db\") " pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.830395 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.843930 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e008dc82-a46e-4cb3-b2c7-d05598f51373-operator-scripts\") pod \"cinder-db-create-kknhq\" (UID: \"e008dc82-a46e-4cb3-b2c7-d05598f51373\") " pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.843972 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc8c6\" (UniqueName: \"kubernetes.io/projected/e008dc82-a46e-4cb3-b2c7-d05598f51373-kube-api-access-bc8c6\") pod \"cinder-db-create-kknhq\" (UID: \"e008dc82-a46e-4cb3-b2c7-d05598f51373\") " pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.844941 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e008dc82-a46e-4cb3-b2c7-d05598f51373-operator-scripts\") pod \"cinder-db-create-kknhq\" (UID: \"e008dc82-a46e-4cb3-b2c7-d05598f51373\") " pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.880836 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc8c6\" (UniqueName: \"kubernetes.io/projected/e008dc82-a46e-4cb3-b2c7-d05598f51373-kube-api-access-bc8c6\") pod \"cinder-db-create-kknhq\" (UID: \"e008dc82-a46e-4cb3-b2c7-d05598f51373\") " pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.892698 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-x4kkz"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.893654 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.899584 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x4kkz"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.919215 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.939091 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-488f-account-create-zckr4"] Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.940052 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.946280 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 09:11:02 crc kubenswrapper[4719]: I1124 09:11:02.983938 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-488f-account-create-zckr4"] Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.000758 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.006372 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" event={"ID":"dfd6020d-d20f-434a-8a51-b78a86354104","Type":"ContainerStarted","Data":"9659d04e5ade151792fe5fa78bf7054a8fdfa214040d322feb16a96e8f819816"} Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.006785 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.048277 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-operator-scripts\") pod \"neutron-db-create-x4kkz\" (UID: \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\") " pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.048326 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dgbh\" (UniqueName: \"kubernetes.io/projected/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-kube-api-access-2dgbh\") pod \"neutron-db-create-x4kkz\" (UID: \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\") " pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.048373 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpslf\" (UniqueName: \"kubernetes.io/projected/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-kube-api-access-qpslf\") pod \"cinder-488f-account-create-zckr4\" (UID: \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\") " pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.048413 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-operator-scripts\") pod \"cinder-488f-account-create-zckr4\" (UID: \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\") " pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.094289 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" podStartSLOduration=3.09427179 podStartE2EDuration="3.09427179s" podCreationTimestamp="2025-11-24 09:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:03.07564438 +0000 UTC m=+1039.406917632" watchObservedRunningTime="2025-11-24 09:11:03.09427179 +0000 UTC m=+1039.425545042" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.099467 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bx8fg"] Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.104667 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.116525 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.116726 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.116844 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d4gqc" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.116945 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.123602 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bx8fg"] Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.150375 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxgkv\" (UniqueName: \"kubernetes.io/projected/16010248-d22e-4551-a3ba-f8b61f6ae440-kube-api-access-qxgkv\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.150438 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpslf\" (UniqueName: \"kubernetes.io/projected/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-kube-api-access-qpslf\") pod \"cinder-488f-account-create-zckr4\" (UID: \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\") " pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.150503 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-operator-scripts\") pod \"cinder-488f-account-create-zckr4\" (UID: \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\") " pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.150529 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-config-data\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.150587 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-combined-ca-bundle\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.150608 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-operator-scripts\") pod \"neutron-db-create-x4kkz\" (UID: \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\") " pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.150644 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dgbh\" (UniqueName: \"kubernetes.io/projected/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-kube-api-access-2dgbh\") pod 
\"neutron-db-create-x4kkz\" (UID: \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\") " pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.164855 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-operator-scripts\") pod \"cinder-488f-account-create-zckr4\" (UID: \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\") " pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.165520 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-operator-scripts\") pod \"neutron-db-create-x4kkz\" (UID: \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\") " pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.240283 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dgbh\" (UniqueName: \"kubernetes.io/projected/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-kube-api-access-2dgbh\") pod \"neutron-db-create-x4kkz\" (UID: \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\") " pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.249585 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpslf\" (UniqueName: \"kubernetes.io/projected/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-kube-api-access-qpslf\") pod \"cinder-488f-account-create-zckr4\" (UID: \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\") " pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.249654 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0ea0-account-create-ckhf9"] Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.263498 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-combined-ca-bundle\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.263585 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxgkv\" (UniqueName: \"kubernetes.io/projected/16010248-d22e-4551-a3ba-f8b61f6ae440-kube-api-access-qxgkv\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.263662 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-config-data\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.265763 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.276986 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-config-data\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.281898 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-combined-ca-bundle\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.288536 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.300752 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0ea0-account-create-ckhf9"] Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.329250 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxgkv\" (UniqueName: \"kubernetes.io/projected/16010248-d22e-4551-a3ba-f8b61f6ae440-kube-api-access-qxgkv\") pod \"keystone-db-sync-bx8fg\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.329634 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.365912 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-operator-scripts\") pod \"neutron-0ea0-account-create-ckhf9\" (UID: \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\") " pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.365964 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k4qx\" (UniqueName: \"kubernetes.io/projected/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-kube-api-access-7k4qx\") pod \"neutron-0ea0-account-create-ckhf9\" (UID: \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\") " pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.386962 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.443665 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.466936 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-operator-scripts\") pod \"neutron-0ea0-account-create-ckhf9\" (UID: \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\") " pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.466996 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k4qx\" (UniqueName: \"kubernetes.io/projected/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-kube-api-access-7k4qx\") pod \"neutron-0ea0-account-create-ckhf9\" (UID: \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\") " pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.470286 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-operator-scripts\") pod \"neutron-0ea0-account-create-ckhf9\" (UID: \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\") " pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.486305 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k4qx\" (UniqueName: \"kubernetes.io/projected/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-kube-api-access-7k4qx\") pod \"neutron-0ea0-account-create-ckhf9\" (UID: \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\") " pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.637426 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.730315 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b51e-account-create-chwq5"] Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.764160 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kknhq"] Nov 24 09:11:03 crc kubenswrapper[4719]: I1124 09:11:03.920382 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xtx8w"] Nov 24 09:11:03 crc kubenswrapper[4719]: W1124 09:11:03.967601 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe421b32_1776_4720_b49e_0188e6cbad0f.slice/crio-97f960685880c805373d96553471867e4720317d542f29e3c05aecde1b449158 WatchSource:0}: Error finding container 97f960685880c805373d96553471867e4720317d542f29e3c05aecde1b449158: Status 404 returned error can't find the container with id 97f960685880c805373d96553471867e4720317d542f29e3c05aecde1b449158 Nov 24 09:11:04 crc kubenswrapper[4719]: I1124 09:11:04.020882 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b51e-account-create-chwq5" event={"ID":"28882cd2-f05b-4e9a-8e96-1c49236337db","Type":"ContainerStarted","Data":"ceb0e62253ea507a02a285a6bff94d32506c6370e3d8c677e68fe7fc52a1276d"} Nov 24 09:11:04 crc kubenswrapper[4719]: I1124 09:11:04.023755 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kknhq" event={"ID":"e008dc82-a46e-4cb3-b2c7-d05598f51373","Type":"ContainerStarted","Data":"879f7174ac5d1e17c1c2a4b5627e7a60b451e11d946f01fb1632a35ad5ad2d13"} Nov 24 09:11:04 crc kubenswrapper[4719]: I1124 09:11:04.025963 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xtx8w" event={"ID":"be421b32-1776-4720-b49e-0188e6cbad0f","Type":"ContainerStarted","Data":"97f960685880c805373d96553471867e4720317d542f29e3c05aecde1b449158"} Nov 24 09:11:04 crc kubenswrapper[4719]: I1124 09:11:04.160663 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-488f-account-create-zckr4"] Nov 24 09:11:04 crc kubenswrapper[4719]: W1124 09:11:04.168355 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a11d0ea_ed2f_4fa2_bcd9_e91d22b0478b.slice/crio-a544cf3fac5d14503abfe639429e32adf548cae47b74d654c2241a8116185dba WatchSource:0}: Error finding container a544cf3fac5d14503abfe639429e32adf548cae47b74d654c2241a8116185dba: Status 404 returned error can't find the container with id a544cf3fac5d14503abfe639429e32adf548cae47b74d654c2241a8116185dba Nov 24 09:11:04 crc kubenswrapper[4719]: I1124 09:11:04.193141 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x4kkz"] Nov 24 09:11:04 crc kubenswrapper[4719]: W1124 09:11:04.207172 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4c080d6_f9b4_42d9_a09c_efad1904b2cf.slice/crio-ef2523e288fb1b28f0c1ba9a0104075b64c23278fb73d76fae27ca8568c39fa8 WatchSource:0}: Error finding container ef2523e288fb1b28f0c1ba9a0104075b64c23278fb73d76fae27ca8568c39fa8: Status 404 returned error can't find the container with id ef2523e288fb1b28f0c1ba9a0104075b64c23278fb73d76fae27ca8568c39fa8 Nov 24 09:11:04 crc kubenswrapper[4719]: I1124 09:11:04.316366 4719 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/keystone-db-sync-bx8fg"] Nov 24 09:11:04 crc kubenswrapper[4719]: I1124 09:11:04.359538 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0ea0-account-create-ckhf9"] Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.033633 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bx8fg" event={"ID":"16010248-d22e-4551-a3ba-f8b61f6ae440","Type":"ContainerStarted","Data":"db8cd74550b035bea05eee170323be9e393ae17175546f4844c0c258715000c4"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.040349 4719 generic.go:334] "Generic (PLEG): container finished" podID="be421b32-1776-4720-b49e-0188e6cbad0f" containerID="2e411a4c5763552bd33d79cfac2eb365a29bb37527babb63883fc74366bb2565" exitCode=0 Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.040437 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xtx8w" event={"ID":"be421b32-1776-4720-b49e-0188e6cbad0f","Type":"ContainerDied","Data":"2e411a4c5763552bd33d79cfac2eb365a29bb37527babb63883fc74366bb2565"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.048627 4719 generic.go:334] "Generic (PLEG): container finished" podID="a4c080d6-f9b4-42d9-a09c-efad1904b2cf" containerID="3cdc16f819fb81378f78092adf291bd1d2869a5d97e109d35ff9fe78567b2521" exitCode=0 Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.048685 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x4kkz" event={"ID":"a4c080d6-f9b4-42d9-a09c-efad1904b2cf","Type":"ContainerDied","Data":"3cdc16f819fb81378f78092adf291bd1d2869a5d97e109d35ff9fe78567b2521"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.048754 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x4kkz" event={"ID":"a4c080d6-f9b4-42d9-a09c-efad1904b2cf","Type":"ContainerStarted","Data":"ef2523e288fb1b28f0c1ba9a0104075b64c23278fb73d76fae27ca8568c39fa8"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.054468 4719 generic.go:334] "Generic (PLEG): container finished" podID="28882cd2-f05b-4e9a-8e96-1c49236337db" containerID="3516c77303a15e0a2dbdc863658ea007d3438f722ddcee5e99c75463c8a928e4" exitCode=0 Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.054539 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b51e-account-create-chwq5" event={"ID":"28882cd2-f05b-4e9a-8e96-1c49236337db","Type":"ContainerDied","Data":"3516c77303a15e0a2dbdc863658ea007d3438f722ddcee5e99c75463c8a928e4"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.059817 4719 generic.go:334] "Generic (PLEG): container finished" podID="e008dc82-a46e-4cb3-b2c7-d05598f51373" containerID="651ef74065ee33b2f85b28c87044ef020143932f55463ff20813d1420b44021b" exitCode=0 Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.059905 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kknhq" event={"ID":"e008dc82-a46e-4cb3-b2c7-d05598f51373","Type":"ContainerDied","Data":"651ef74065ee33b2f85b28c87044ef020143932f55463ff20813d1420b44021b"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.068183 4719 generic.go:334] "Generic (PLEG): container finished" podID="5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b" containerID="e29bee50aa3a67544b73ad8537d937852c7a176571fc24018ee61a8b15b59ed4" exitCode=0 Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.068265 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-488f-account-create-zckr4" 
event={"ID":"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b","Type":"ContainerDied","Data":"e29bee50aa3a67544b73ad8537d937852c7a176571fc24018ee61a8b15b59ed4"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.068299 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-488f-account-create-zckr4" event={"ID":"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b","Type":"ContainerStarted","Data":"a544cf3fac5d14503abfe639429e32adf548cae47b74d654c2241a8116185dba"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.071559 4719 generic.go:334] "Generic (PLEG): container finished" podID="fcfb8371-3ece-4ec3-871c-d9eb12e4eb58" containerID="e3e2f7e1de4576458f3052e3486213a2242e885e5e1316121c34f5d097b4fcef" exitCode=0 Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.071611 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0ea0-account-create-ckhf9" event={"ID":"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58","Type":"ContainerDied","Data":"e3e2f7e1de4576458f3052e3486213a2242e885e5e1316121c34f5d097b4fcef"} Nov 24 09:11:05 crc kubenswrapper[4719]: I1124 09:11:05.071639 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0ea0-account-create-ckhf9" event={"ID":"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58","Type":"ContainerStarted","Data":"33e240cad246c13e86026f595eae8f887f4be13d2bd0497bbf1a1a75d7ea91e3"} Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.577334 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.636647 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw8d2\" (UniqueName: \"kubernetes.io/projected/be421b32-1776-4720-b49e-0188e6cbad0f-kube-api-access-fw8d2\") pod \"be421b32-1776-4720-b49e-0188e6cbad0f\" (UID: \"be421b32-1776-4720-b49e-0188e6cbad0f\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.636779 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be421b32-1776-4720-b49e-0188e6cbad0f-operator-scripts\") pod \"be421b32-1776-4720-b49e-0188e6cbad0f\" (UID: \"be421b32-1776-4720-b49e-0188e6cbad0f\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.650447 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be421b32-1776-4720-b49e-0188e6cbad0f-kube-api-access-fw8d2" (OuterVolumeSpecName: "kube-api-access-fw8d2") pod "be421b32-1776-4720-b49e-0188e6cbad0f" (UID: "be421b32-1776-4720-b49e-0188e6cbad0f"). InnerVolumeSpecName "kube-api-access-fw8d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.660460 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be421b32-1776-4720-b49e-0188e6cbad0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be421b32-1776-4720-b49e-0188e6cbad0f" (UID: "be421b32-1776-4720-b49e-0188e6cbad0f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.739152 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be421b32-1776-4720-b49e-0188e6cbad0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.739190 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw8d2\" (UniqueName: \"kubernetes.io/projected/be421b32-1776-4720-b49e-0188e6cbad0f-kube-api-access-fw8d2\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.777890 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.791471 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.800422 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.816303 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.817498 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.840138 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dgbh\" (UniqueName: \"kubernetes.io/projected/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-kube-api-access-2dgbh\") pod \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\" (UID: \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.840252 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-operator-scripts\") pod \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\" (UID: \"a4c080d6-f9b4-42d9-a09c-efad1904b2cf\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.840346 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e008dc82-a46e-4cb3-b2c7-d05598f51373-operator-scripts\") pod \"e008dc82-a46e-4cb3-b2c7-d05598f51373\" (UID: \"e008dc82-a46e-4cb3-b2c7-d05598f51373\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.840403 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc8c6\" (UniqueName: \"kubernetes.io/projected/e008dc82-a46e-4cb3-b2c7-d05598f51373-kube-api-access-bc8c6\") pod \"e008dc82-a46e-4cb3-b2c7-d05598f51373\" (UID: \"e008dc82-a46e-4cb3-b2c7-d05598f51373\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.841324 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4c080d6-f9b4-42d9-a09c-efad1904b2cf" (UID: "a4c080d6-f9b4-42d9-a09c-efad1904b2cf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.841446 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e008dc82-a46e-4cb3-b2c7-d05598f51373-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e008dc82-a46e-4cb3-b2c7-d05598f51373" (UID: "e008dc82-a46e-4cb3-b2c7-d05598f51373"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.843670 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e008dc82-a46e-4cb3-b2c7-d05598f51373-kube-api-access-bc8c6" (OuterVolumeSpecName: "kube-api-access-bc8c6") pod "e008dc82-a46e-4cb3-b2c7-d05598f51373" (UID: "e008dc82-a46e-4cb3-b2c7-d05598f51373"). InnerVolumeSpecName "kube-api-access-bc8c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.849852 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-kube-api-access-2dgbh" (OuterVolumeSpecName: "kube-api-access-2dgbh") pod "a4c080d6-f9b4-42d9-a09c-efad1904b2cf" (UID: "a4c080d6-f9b4-42d9-a09c-efad1904b2cf"). InnerVolumeSpecName "kube-api-access-2dgbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.941824 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8662g\" (UniqueName: \"kubernetes.io/projected/28882cd2-f05b-4e9a-8e96-1c49236337db-kube-api-access-8662g\") pod \"28882cd2-f05b-4e9a-8e96-1c49236337db\" (UID: \"28882cd2-f05b-4e9a-8e96-1c49236337db\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.941886 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k4qx\" (UniqueName: \"kubernetes.io/projected/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-kube-api-access-7k4qx\") pod \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\" (UID: \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.941916 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpslf\" (UniqueName: \"kubernetes.io/projected/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-kube-api-access-qpslf\") pod \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\" (UID: \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.941951 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-operator-scripts\") pod \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\" (UID: \"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942165 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28882cd2-f05b-4e9a-8e96-1c49236337db-operator-scripts\") pod \"28882cd2-f05b-4e9a-8e96-1c49236337db\" (UID: \"28882cd2-f05b-4e9a-8e96-1c49236337db\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942193 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-operator-scripts\") pod 
\"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\" (UID: \"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58\") " Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942550 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc8c6\" (UniqueName: \"kubernetes.io/projected/e008dc82-a46e-4cb3-b2c7-d05598f51373-kube-api-access-bc8c6\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942551 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b" (UID: "5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942572 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dgbh\" (UniqueName: \"kubernetes.io/projected/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-kube-api-access-2dgbh\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942615 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4c080d6-f9b4-42d9-a09c-efad1904b2cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942628 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e008dc82-a46e-4cb3-b2c7-d05598f51373-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942846 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28882cd2-f05b-4e9a-8e96-1c49236337db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28882cd2-f05b-4e9a-8e96-1c49236337db" (UID: "28882cd2-f05b-4e9a-8e96-1c49236337db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.942903 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcfb8371-3ece-4ec3-871c-d9eb12e4eb58" (UID: "fcfb8371-3ece-4ec3-871c-d9eb12e4eb58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.945065 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-kube-api-access-qpslf" (OuterVolumeSpecName: "kube-api-access-qpslf") pod "5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b" (UID: "5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b"). InnerVolumeSpecName "kube-api-access-qpslf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.945517 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28882cd2-f05b-4e9a-8e96-1c49236337db-kube-api-access-8662g" (OuterVolumeSpecName: "kube-api-access-8662g") pod "28882cd2-f05b-4e9a-8e96-1c49236337db" (UID: "28882cd2-f05b-4e9a-8e96-1c49236337db"). InnerVolumeSpecName "kube-api-access-8662g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:06 crc kubenswrapper[4719]: I1124 09:11:06.946157 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-kube-api-access-7k4qx" (OuterVolumeSpecName: "kube-api-access-7k4qx") pod "fcfb8371-3ece-4ec3-871c-d9eb12e4eb58" (UID: "fcfb8371-3ece-4ec3-871c-d9eb12e4eb58"). InnerVolumeSpecName "kube-api-access-7k4qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.043956 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.043990 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28882cd2-f05b-4e9a-8e96-1c49236337db-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.043999 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.044008 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8662g\" (UniqueName: \"kubernetes.io/projected/28882cd2-f05b-4e9a-8e96-1c49236337db-kube-api-access-8662g\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.044019 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k4qx\" (UniqueName: \"kubernetes.io/projected/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58-kube-api-access-7k4qx\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.044027 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpslf\" (UniqueName: \"kubernetes.io/projected/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b-kube-api-access-qpslf\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.090593 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0ea0-account-create-ckhf9" event={"ID":"fcfb8371-3ece-4ec3-871c-d9eb12e4eb58","Type":"ContainerDied","Data":"33e240cad246c13e86026f595eae8f887f4be13d2bd0497bbf1a1a75d7ea91e3"} Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.090647 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33e240cad246c13e86026f595eae8f887f4be13d2bd0497bbf1a1a75d7ea91e3" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.090604 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0ea0-account-create-ckhf9" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.091946 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xtx8w" event={"ID":"be421b32-1776-4720-b49e-0188e6cbad0f","Type":"ContainerDied","Data":"97f960685880c805373d96553471867e4720317d542f29e3c05aecde1b449158"} Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.091964 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xtx8w" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.091968 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97f960685880c805373d96553471867e4720317d542f29e3c05aecde1b449158" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.093310 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x4kkz" event={"ID":"a4c080d6-f9b4-42d9-a09c-efad1904b2cf","Type":"ContainerDied","Data":"ef2523e288fb1b28f0c1ba9a0104075b64c23278fb73d76fae27ca8568c39fa8"} Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.093341 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef2523e288fb1b28f0c1ba9a0104075b64c23278fb73d76fae27ca8568c39fa8" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.093349 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x4kkz" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.094434 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b51e-account-create-chwq5" event={"ID":"28882cd2-f05b-4e9a-8e96-1c49236337db","Type":"ContainerDied","Data":"ceb0e62253ea507a02a285a6bff94d32506c6370e3d8c677e68fe7fc52a1276d"} Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.094477 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceb0e62253ea507a02a285a6bff94d32506c6370e3d8c677e68fe7fc52a1276d" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.094484 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b51e-account-create-chwq5" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.096413 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kknhq" event={"ID":"e008dc82-a46e-4cb3-b2c7-d05598f51373","Type":"ContainerDied","Data":"879f7174ac5d1e17c1c2a4b5627e7a60b451e11d946f01fb1632a35ad5ad2d13"} Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.096427 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kknhq" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.096438 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="879f7174ac5d1e17c1c2a4b5627e7a60b451e11d946f01fb1632a35ad5ad2d13" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.097816 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-488f-account-create-zckr4" event={"ID":"5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b","Type":"ContainerDied","Data":"a544cf3fac5d14503abfe639429e32adf548cae47b74d654c2241a8116185dba"} Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.097840 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a544cf3fac5d14503abfe639429e32adf548cae47b74d654c2241a8116185dba" Nov 24 09:11:07 crc kubenswrapper[4719]: I1124 09:11:07.097843 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-488f-account-create-zckr4" Nov 24 09:11:10 crc kubenswrapper[4719]: I1124 09:11:10.122726 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bx8fg" event={"ID":"16010248-d22e-4551-a3ba-f8b61f6ae440","Type":"ContainerStarted","Data":"00c542ad24716575f59444038f676feaa5fa431f3827a880e2d8df112f5fbfbf"} Nov 24 09:11:10 crc kubenswrapper[4719]: I1124 09:11:10.160144 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bx8fg" podStartSLOduration=1.61979812 podStartE2EDuration="7.160116915s" podCreationTimestamp="2025-11-24 09:11:03 +0000 UTC" firstStartedPulling="2025-11-24 09:11:04.356211708 +0000 UTC m=+1040.687484960" lastFinishedPulling="2025-11-24 09:11:09.896530503 +0000 UTC m=+1046.227803755" observedRunningTime="2025-11-24 09:11:10.148571371 +0000 UTC m=+1046.479844653" watchObservedRunningTime="2025-11-24 09:11:10.160116915 +0000 UTC m=+1046.491390197" Nov 24 09:11:10 crc kubenswrapper[4719]: I1124 09:11:10.710203 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:10 crc kubenswrapper[4719]: I1124 09:11:10.785736 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-68ff6"] Nov 24 09:11:10 crc kubenswrapper[4719]: I1124 09:11:10.786172 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-68ff6" podUID="9e84dd25-4828-43e5-80a8-25307b77944f" containerName="dnsmasq-dns" containerID="cri-o://5aca69983d94956e6f451ad5e0919275f0d18427dee24d4b8ca5a1d0d74f7d28" gracePeriod=10 Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.132802 4719 generic.go:334] "Generic (PLEG): container finished" podID="9e84dd25-4828-43e5-80a8-25307b77944f" containerID="5aca69983d94956e6f451ad5e0919275f0d18427dee24d4b8ca5a1d0d74f7d28" exitCode=0 Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.133942 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-68ff6" event={"ID":"9e84dd25-4828-43e5-80a8-25307b77944f","Type":"ContainerDied","Data":"5aca69983d94956e6f451ad5e0919275f0d18427dee24d4b8ca5a1d0d74f7d28"} Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.347640 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.428650 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-dns-svc\") pod \"9e84dd25-4828-43e5-80a8-25307b77944f\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.428711 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc6nc\" (UniqueName: \"kubernetes.io/projected/9e84dd25-4828-43e5-80a8-25307b77944f-kube-api-access-mc6nc\") pod \"9e84dd25-4828-43e5-80a8-25307b77944f\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.428826 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-sb\") pod \"9e84dd25-4828-43e5-80a8-25307b77944f\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.428880 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-nb\") pod \"9e84dd25-4828-43e5-80a8-25307b77944f\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.428935 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-config\") pod \"9e84dd25-4828-43e5-80a8-25307b77944f\" (UID: \"9e84dd25-4828-43e5-80a8-25307b77944f\") " Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.435048 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e84dd25-4828-43e5-80a8-25307b77944f-kube-api-access-mc6nc" (OuterVolumeSpecName: "kube-api-access-mc6nc") pod "9e84dd25-4828-43e5-80a8-25307b77944f" (UID: "9e84dd25-4828-43e5-80a8-25307b77944f"). InnerVolumeSpecName "kube-api-access-mc6nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.533051 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc6nc\" (UniqueName: \"kubernetes.io/projected/9e84dd25-4828-43e5-80a8-25307b77944f-kube-api-access-mc6nc\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.560838 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e84dd25-4828-43e5-80a8-25307b77944f" (UID: "9e84dd25-4828-43e5-80a8-25307b77944f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.566653 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-config" (OuterVolumeSpecName: "config") pod "9e84dd25-4828-43e5-80a8-25307b77944f" (UID: "9e84dd25-4828-43e5-80a8-25307b77944f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.575953 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e84dd25-4828-43e5-80a8-25307b77944f" (UID: "9e84dd25-4828-43e5-80a8-25307b77944f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.576480 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e84dd25-4828-43e5-80a8-25307b77944f" (UID: "9e84dd25-4828-43e5-80a8-25307b77944f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.635991 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.636051 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.636065 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:11 crc kubenswrapper[4719]: I1124 09:11:11.636076 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e84dd25-4828-43e5-80a8-25307b77944f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:12 crc kubenswrapper[4719]: I1124 09:11:12.142571 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-68ff6" event={"ID":"9e84dd25-4828-43e5-80a8-25307b77944f","Type":"ContainerDied","Data":"b87e4c0d23ab5d186e1d039dbba4b254e62416371e1de4aa586f5e25f336a0c5"} Nov 24 09:11:12 crc kubenswrapper[4719]: I1124 09:11:12.142600 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-68ff6" Nov 24 09:11:12 crc kubenswrapper[4719]: I1124 09:11:12.142636 4719 scope.go:117] "RemoveContainer" containerID="5aca69983d94956e6f451ad5e0919275f0d18427dee24d4b8ca5a1d0d74f7d28" Nov 24 09:11:12 crc kubenswrapper[4719]: I1124 09:11:12.159030 4719 scope.go:117] "RemoveContainer" containerID="3d6a584fa4445dc408eedaf1d6e870521f71641f51da0f8c7ee432a33755c167" Nov 24 09:11:12 crc kubenswrapper[4719]: I1124 09:11:12.175281 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-68ff6"] Nov 24 09:11:12 crc kubenswrapper[4719]: I1124 09:11:12.180836 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-68ff6"] Nov 24 09:11:12 crc kubenswrapper[4719]: I1124 09:11:12.532654 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e84dd25-4828-43e5-80a8-25307b77944f" path="/var/lib/kubelet/pods/9e84dd25-4828-43e5-80a8-25307b77944f/volumes" Nov 24 09:11:14 crc kubenswrapper[4719]: I1124 09:11:14.157477 4719 generic.go:334] "Generic (PLEG): container finished" podID="16010248-d22e-4551-a3ba-f8b61f6ae440" containerID="00c542ad24716575f59444038f676feaa5fa431f3827a880e2d8df112f5fbfbf" exitCode=0 Nov 24 09:11:14 crc kubenswrapper[4719]: I1124 09:11:14.157548 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bx8fg" event={"ID":"16010248-d22e-4551-a3ba-f8b61f6ae440","Type":"ContainerDied","Data":"00c542ad24716575f59444038f676feaa5fa431f3827a880e2d8df112f5fbfbf"} Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.486252 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.595381 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-combined-ca-bundle\") pod \"16010248-d22e-4551-a3ba-f8b61f6ae440\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.595886 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxgkv\" (UniqueName: \"kubernetes.io/projected/16010248-d22e-4551-a3ba-f8b61f6ae440-kube-api-access-qxgkv\") pod \"16010248-d22e-4551-a3ba-f8b61f6ae440\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.596169 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-config-data\") pod \"16010248-d22e-4551-a3ba-f8b61f6ae440\" (UID: \"16010248-d22e-4551-a3ba-f8b61f6ae440\") " Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.601210 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16010248-d22e-4551-a3ba-f8b61f6ae440-kube-api-access-qxgkv" (OuterVolumeSpecName: "kube-api-access-qxgkv") pod "16010248-d22e-4551-a3ba-f8b61f6ae440" (UID: "16010248-d22e-4551-a3ba-f8b61f6ae440"). InnerVolumeSpecName "kube-api-access-qxgkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.623396 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16010248-d22e-4551-a3ba-f8b61f6ae440" (UID: "16010248-d22e-4551-a3ba-f8b61f6ae440"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.637246 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-config-data" (OuterVolumeSpecName: "config-data") pod "16010248-d22e-4551-a3ba-f8b61f6ae440" (UID: "16010248-d22e-4551-a3ba-f8b61f6ae440"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.698748 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxgkv\" (UniqueName: \"kubernetes.io/projected/16010248-d22e-4551-a3ba-f8b61f6ae440-kube-api-access-qxgkv\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.699156 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:15 crc kubenswrapper[4719]: I1124 09:11:15.699253 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16010248-d22e-4551-a3ba-f8b61f6ae440-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.176601 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bx8fg" event={"ID":"16010248-d22e-4551-a3ba-f8b61f6ae440","Type":"ContainerDied","Data":"db8cd74550b035bea05eee170323be9e393ae17175546f4844c0c258715000c4"} Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.176817 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db8cd74550b035bea05eee170323be9e393ae17175546f4844c0c258715000c4" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.176969 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bx8fg" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403088 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pxqhk"] Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403715 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e84dd25-4828-43e5-80a8-25307b77944f" containerName="dnsmasq-dns" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403735 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e84dd25-4828-43e5-80a8-25307b77944f" containerName="dnsmasq-dns" Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403750 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcfb8371-3ece-4ec3-871c-d9eb12e4eb58" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403758 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcfb8371-3ece-4ec3-871c-d9eb12e4eb58" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403784 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16010248-d22e-4551-a3ba-f8b61f6ae440" containerName="keystone-db-sync" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403791 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="16010248-d22e-4551-a3ba-f8b61f6ae440" containerName="keystone-db-sync" Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403809 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c080d6-f9b4-42d9-a09c-efad1904b2cf" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403817 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c080d6-f9b4-42d9-a09c-efad1904b2cf" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403829 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be421b32-1776-4720-b49e-0188e6cbad0f" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403839 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="be421b32-1776-4720-b49e-0188e6cbad0f" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403860 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e008dc82-a46e-4cb3-b2c7-d05598f51373" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403868 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e008dc82-a46e-4cb3-b2c7-d05598f51373" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403886 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28882cd2-f05b-4e9a-8e96-1c49236337db" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403895 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="28882cd2-f05b-4e9a-8e96-1c49236337db" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403907 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e84dd25-4828-43e5-80a8-25307b77944f" containerName="init" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403914 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e84dd25-4828-43e5-80a8-25307b77944f" containerName="init" Nov 24 09:11:16 crc kubenswrapper[4719]: E1124 09:11:16.403926 4719 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.403933 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404141 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcfb8371-3ece-4ec3-871c-d9eb12e4eb58" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404180 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="be421b32-1776-4720-b49e-0188e6cbad0f" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404199 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c080d6-f9b4-42d9-a09c-efad1904b2cf" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404216 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e008dc82-a46e-4cb3-b2c7-d05598f51373" containerName="mariadb-database-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404240 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e84dd25-4828-43e5-80a8-25307b77944f" containerName="dnsmasq-dns" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404253 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="28882cd2-f05b-4e9a-8e96-1c49236337db" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404275 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="16010248-d22e-4551-a3ba-f8b61f6ae440" containerName="keystone-db-sync" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404294 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b" containerName="mariadb-account-create" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.404927 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.409817 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.410021 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.410167 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.410092 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.410132 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d4gqc" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.475726 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pxqhk"] Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.525603 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-fernet-keys\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.525687 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-combined-ca-bundle\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.525733 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-config-data\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.525773 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-scripts\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.525822 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtn67\" (UniqueName: \"kubernetes.io/projected/5285f1c4-f873-488b-bb55-643779ff8672-kube-api-access-jtn67\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.525857 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-credential-keys\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.574994 4719 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-67795cd9-dqzq7"] Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.576382 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.594797 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-dqzq7"] Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629363 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629495 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-fernet-keys\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629523 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629543 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-combined-ca-bundle\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629564 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-config-data\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629592 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-scripts\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629609 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn69p\" (UniqueName: \"kubernetes.io/projected/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-kube-api-access-qn69p\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629631 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-config\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 
09:11:16.629652 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtn67\" (UniqueName: \"kubernetes.io/projected/5285f1c4-f873-488b-bb55-643779ff8672-kube-api-access-jtn67\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629668 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-credential-keys\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.629698 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-dns-svc\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.636299 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-scripts\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.641849 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-config-data\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.644799 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-fernet-keys\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.645238 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-combined-ca-bundle\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.670833 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-credential-keys\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.673549 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtn67\" (UniqueName: \"kubernetes.io/projected/5285f1c4-f873-488b-bb55-643779ff8672-kube-api-access-jtn67\") pod \"keystone-bootstrap-pxqhk\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.730940 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.731019 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn69p\" (UniqueName: \"kubernetes.io/projected/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-kube-api-access-qn69p\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.731071 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-config\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.731118 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-dns-svc\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.731149 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.732166 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.732794 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.734059 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-config\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.734691 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-dns-svc\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.743534 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.779029 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn69p\" (UniqueName: \"kubernetes.io/projected/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-kube-api-access-qn69p\") pod \"dnsmasq-dns-67795cd9-dqzq7\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.852109 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8bn65"] Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.853408 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.867548 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.867811 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-h75nh" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.867970 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.886580 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-k2l9n"] Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.887672 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.903776 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.903852 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d798x" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.904156 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.917780 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.921614 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8bn65"] Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.935120 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-k2l9n"] Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.939009 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-scripts\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.939103 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/902a4567-228a-43e0-b6c4-c323c4366c94-etc-machine-id\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.939131 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-db-sync-config-data\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.939151 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-combined-ca-bundle\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.939173 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-config-data\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.939222 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-config\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.939268 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsj2x\" (UniqueName: \"kubernetes.io/projected/32da9e0b-97ee-48e0-bdd2-2c21bb019294-kube-api-access-gsj2x\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:16 crc kubenswrapper[4719]: I1124 09:11:16.939311 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s6sb\" (UniqueName: \"kubernetes.io/projected/902a4567-228a-43e0-b6c4-c323c4366c94-kube-api-access-2s6sb\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:16 crc 
kubenswrapper[4719]: I1124 09:11:16.939497 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-combined-ca-bundle\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.000137 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kggqc"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.011867 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.019272 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6rsp8" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.019426 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.037986 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kggqc"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.040922 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjswt\" (UniqueName: \"kubernetes.io/projected/84a9592e-0967-49ec-a421-66e027b6d56a-kube-api-access-zjswt\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041004 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/902a4567-228a-43e0-b6c4-c323c4366c94-etc-machine-id\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041025 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-db-sync-config-data\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041056 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-combined-ca-bundle\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041078 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-config-data\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041117 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-config\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041147 4719 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-db-sync-config-data\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041181 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsj2x\" (UniqueName: \"kubernetes.io/projected/32da9e0b-97ee-48e0-bdd2-2c21bb019294-kube-api-access-gsj2x\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041196 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s6sb\" (UniqueName: \"kubernetes.io/projected/902a4567-228a-43e0-b6c4-c323c4366c94-kube-api-access-2s6sb\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041223 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-combined-ca-bundle\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041240 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-combined-ca-bundle\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.041274 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-scripts\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.045123 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/902a4567-228a-43e0-b6c4-c323c4366c94-etc-machine-id\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.060251 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-combined-ca-bundle\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.061516 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-scripts\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.062234 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-config-data\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.063196 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-combined-ca-bundle\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.066593 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-config\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.075552 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-db-sync-config-data\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.090817 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s6sb\" (UniqueName: \"kubernetes.io/projected/902a4567-228a-43e0-b6c4-c323c4366c94-kube-api-access-2s6sb\") pod \"cinder-db-sync-8bn65\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.107302 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-dqzq7"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.112654 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsj2x\" (UniqueName: \"kubernetes.io/projected/32da9e0b-97ee-48e0-bdd2-2c21bb019294-kube-api-access-gsj2x\") pod \"neutron-db-sync-k2l9n\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.143809 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjswt\" (UniqueName: \"kubernetes.io/projected/84a9592e-0967-49ec-a421-66e027b6d56a-kube-api-access-zjswt\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.143916 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-db-sync-config-data\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.143957 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-combined-ca-bundle\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.144670 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ht2vd"] Nov 24 09:11:17 crc 
kubenswrapper[4719]: I1124 09:11:17.145602 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.150367 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-combined-ca-bundle\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.152411 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.153441 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-db-sync-config-data\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.162774 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-27d8t" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.163056 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.175592 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjswt\" (UniqueName: \"kubernetes.io/projected/84a9592e-0967-49ec-a421-66e027b6d56a-kube-api-access-zjswt\") pod \"barbican-db-sync-kggqc\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.190157 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ht2vd"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.228581 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.230312 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.238738 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8bn65" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.244451 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.249000 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-combined-ca-bundle\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.249239 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1bf4ab-344c-4335-b16a-828d28141f11-logs\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.249349 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqkt4\" (UniqueName: \"kubernetes.io/projected/2e1bf4ab-344c-4335-b16a-828d28141f11-kube-api-access-tqkt4\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.249425 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-config-data\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.249461 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-scripts\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.257505 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.264427 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.269702 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.269936 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.280428 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.296235 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372168 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372472 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9lk4\" (UniqueName: \"kubernetes.io/projected/db6461b8-f751-4248-a4fc-fe1b3b987706-kube-api-access-g9lk4\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372510 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372539 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1bf4ab-344c-4335-b16a-828d28141f11-logs\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372575 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-log-httpd\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372597 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqkt4\" (UniqueName: \"kubernetes.io/projected/2e1bf4ab-344c-4335-b16a-828d28141f11-kube-api-access-tqkt4\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372611 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-scripts\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372610 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.372631 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.374403 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-run-httpd\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.374438 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-config\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.374517 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-config-data\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.374552 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-scripts\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.374623 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.374672 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzh6h\" (UniqueName: \"kubernetes.io/projected/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-kube-api-access-nzh6h\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.375162 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.375234 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-combined-ca-bundle\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:17 crc kubenswrapper[4719]: 
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.379186 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1bf4ab-344c-4335-b16a-828d28141f11-logs\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.382869 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-scripts\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.383312 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-config-data\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.399769 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-combined-ca-bundle\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.405736 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqkt4\" (UniqueName: \"kubernetes.io/projected/2e1bf4ab-344c-4335-b16a-828d28141f11-kube-api-access-tqkt4\") pod \"placement-db-sync-ht2vd\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " pod="openstack/placement-db-sync-ht2vd"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479740 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-config-data\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479787 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479816 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9lk4\" (UniqueName: \"kubernetes.io/projected/db6461b8-f751-4248-a4fc-fe1b3b987706-kube-api-access-g9lk4\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479844 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479883 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-log-httpd\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479904 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-scripts\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479923 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479938 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-run-httpd\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479952 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-config\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.479992 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.480013 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzh6h\" (UniqueName: \"kubernetes.io/projected/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-kube-api-access-nzh6h\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.480051 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.482238 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.483071 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-config\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.483336 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-run-httpd\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.484117 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-scripts\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.486065 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-log-httpd\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.486762 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.487373 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.488238 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ht2vd"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.490088 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.499844 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.501871 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-config-data\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.528294 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzh6h\" (UniqueName: \"kubernetes.io/projected/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-kube-api-access-nzh6h\") pod \"ceilometer-0\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " pod="openstack/ceilometer-0"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.533425 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9lk4\" (UniqueName: \"kubernetes.io/projected/db6461b8-f751-4248-a4fc-fe1b3b987706-kube-api-access-g9lk4\") pod \"dnsmasq-dns-5b6dbdb6f5-6pxsp\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.554653 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"
Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.595640 4719 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.629093 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pxqhk"] Nov 24 09:11:17 crc kubenswrapper[4719]: I1124 09:11:17.885146 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-dqzq7"] Nov 24 09:11:17 crc kubenswrapper[4719]: W1124 09:11:17.905919 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5c96da9_e5b2_4ef8_b4c4_ae8f0868ea00.slice/crio-81fe4c22917a7365b809771575b827daac19fad8fd8fba706e51870d6a391995 WatchSource:0}: Error finding container 81fe4c22917a7365b809771575b827daac19fad8fd8fba706e51870d6a391995: Status 404 returned error can't find the container with id 81fe4c22917a7365b809771575b827daac19fad8fd8fba706e51870d6a391995 Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.006892 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8bn65"] Nov 24 09:11:18 crc kubenswrapper[4719]: W1124 09:11:18.040746 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod902a4567_228a_43e0_b6c4_c323c4366c94.slice/crio-b521947da76ad5af6d94183a514103fd7676f5dab5e26d62fd82aa58fce16584 WatchSource:0}: Error finding container b521947da76ad5af6d94183a514103fd7676f5dab5e26d62fd82aa58fce16584: Status 404 returned error can't find the container with id b521947da76ad5af6d94183a514103fd7676f5dab5e26d62fd82aa58fce16584 Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.254128 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" event={"ID":"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00","Type":"ContainerStarted","Data":"f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca"} Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.254183 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" event={"ID":"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00","Type":"ContainerStarted","Data":"81fe4c22917a7365b809771575b827daac19fad8fd8fba706e51870d6a391995"} Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.254357 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" podUID="b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" containerName="init" containerID="cri-o://f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca" gracePeriod=10 Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.258553 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8bn65" event={"ID":"902a4567-228a-43e0-b6c4-c323c4366c94","Type":"ContainerStarted","Data":"b521947da76ad5af6d94183a514103fd7676f5dab5e26d62fd82aa58fce16584"} Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.262366 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxqhk" event={"ID":"5285f1c4-f873-488b-bb55-643779ff8672","Type":"ContainerStarted","Data":"287380c3ec074c5c596ec45de04102841a68316200401bae503db9b7e831f9d9"} Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.262442 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxqhk" event={"ID":"5285f1c4-f873-488b-bb55-643779ff8672","Type":"ContainerStarted","Data":"03d5684fcd8a4515ddc15731e973e5479ecff36df5d154ae4cd7b12391055bcc"} Nov 24 09:11:18 crc 
kubenswrapper[4719]: I1124 09:11:18.275779 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-k2l9n"] Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.322779 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kggqc"] Nov 24 09:11:18 crc kubenswrapper[4719]: W1124 09:11:18.335099 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84a9592e_0967_49ec_a421_66e027b6d56a.slice/crio-beb78ebfe88d8d01e2847ee2a6df85c4052281fb2b83e844e84b44ca43a49d02 WatchSource:0}: Error finding container beb78ebfe88d8d01e2847ee2a6df85c4052281fb2b83e844e84b44ca43a49d02: Status 404 returned error can't find the container with id beb78ebfe88d8d01e2847ee2a6df85c4052281fb2b83e844e84b44ca43a49d02 Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.348875 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pxqhk" podStartSLOduration=2.3488546169999998 podStartE2EDuration="2.348854617s" podCreationTimestamp="2025-11-24 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:18.305462239 +0000 UTC m=+1054.636735491" watchObservedRunningTime="2025-11-24 09:11:18.348854617 +0000 UTC m=+1054.680127869" Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.418709 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ht2vd"] Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.431401 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.557640 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"] Nov 24 09:11:18 crc kubenswrapper[4719]: W1124 09:11:18.563776 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb6461b8_f751_4248_a4fc_fe1b3b987706.slice/crio-929f7d53033cdf45a8f0dbaa9d7128edf3832ed01a44805557c918c95f4d54ba WatchSource:0}: Error finding container 929f7d53033cdf45a8f0dbaa9d7128edf3832ed01a44805557c918c95f4d54ba: Status 404 returned error can't find the container with id 929f7d53033cdf45a8f0dbaa9d7128edf3832ed01a44805557c918c95f4d54ba Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.819556 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.930839 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-sb\") pod \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.930967 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-dns-svc\") pod \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.931029 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn69p\" (UniqueName: \"kubernetes.io/projected/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-kube-api-access-qn69p\") pod \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.931096 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-config\") pod \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.931151 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-nb\") pod \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\" (UID: \"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00\") " Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.938486 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-kube-api-access-qn69p" (OuterVolumeSpecName: "kube-api-access-qn69p") pod "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" (UID: "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00"). InnerVolumeSpecName "kube-api-access-qn69p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.966004 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" (UID: "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:18 crc kubenswrapper[4719]: I1124 09:11:18.986584 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" (UID: "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.020287 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" (UID: "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.020813 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-config" (OuterVolumeSpecName: "config") pod "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" (UID: "b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.032595 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.032844 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.032907 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.032976 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.033054 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn69p\" (UniqueName: \"kubernetes.io/projected/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00-kube-api-access-qn69p\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.083909 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:11:19 crc kubenswrapper[4719]: E1124 09:11:19.242741 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb6461b8_f751_4248_a4fc_fe1b3b987706.slice/crio-f98f13014f042ea36032b4229c2d22cb6aed65e7031aa956729e88394a2dd9d0.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.291592 4719 generic.go:334] "Generic (PLEG): container finished" podID="b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" containerID="f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca" exitCode=0 Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.291842 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" event={"ID":"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00","Type":"ContainerDied","Data":"f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca"} Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.291871 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" event={"ID":"b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00","Type":"ContainerDied","Data":"81fe4c22917a7365b809771575b827daac19fad8fd8fba706e51870d6a391995"} Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.291886 4719 scope.go:117] "RemoveContainer" containerID="f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.292005 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-dqzq7" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.340463 4719 generic.go:334] "Generic (PLEG): container finished" podID="db6461b8-f751-4248-a4fc-fe1b3b987706" containerID="f98f13014f042ea36032b4229c2d22cb6aed65e7031aa956729e88394a2dd9d0" exitCode=0 Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.340555 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" event={"ID":"db6461b8-f751-4248-a4fc-fe1b3b987706","Type":"ContainerDied","Data":"f98f13014f042ea36032b4229c2d22cb6aed65e7031aa956729e88394a2dd9d0"} Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.340585 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" event={"ID":"db6461b8-f751-4248-a4fc-fe1b3b987706","Type":"ContainerStarted","Data":"929f7d53033cdf45a8f0dbaa9d7128edf3832ed01a44805557c918c95f4d54ba"} Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.389518 4719 scope.go:117] "RemoveContainer" containerID="f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca" Nov 24 09:11:19 crc kubenswrapper[4719]: E1124 09:11:19.391777 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca\": container with ID starting with f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca not found: ID does not exist" containerID="f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.391818 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca"} err="failed to get container status \"f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca\": rpc error: code = NotFound desc = could not find container \"f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca\": container with ID starting with f5a17677d4fdb04be391d0edd7dee43f110acc420893000d18add0a4f05171ca not found: ID does not exist" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.397233 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-dqzq7"] Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.398791 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-k2l9n" event={"ID":"32da9e0b-97ee-48e0-bdd2-2c21bb019294","Type":"ContainerStarted","Data":"d2d6692fa00534dc12ffb23def6ee8755851aa7601abdb202c2bf066688f9a82"} Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.398817 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-k2l9n" event={"ID":"32da9e0b-97ee-48e0-bdd2-2c21bb019294","Type":"ContainerStarted","Data":"b81649a26b8ac29ae1528e95df6223de5226dc5b4fa375d1dcdbfbad9657c85d"} Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.414450 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-dqzq7"] Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.425226 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kggqc" event={"ID":"84a9592e-0967-49ec-a421-66e027b6d56a","Type":"ContainerStarted","Data":"beb78ebfe88d8d01e2847ee2a6df85c4052281fb2b83e844e84b44ca43a49d02"} Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.479311 4719 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/neutron-db-sync-k2l9n" podStartSLOduration=3.479292843 podStartE2EDuration="3.479292843s" podCreationTimestamp="2025-11-24 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:19.456309326 +0000 UTC m=+1055.787582568" watchObservedRunningTime="2025-11-24 09:11:19.479292843 +0000 UTC m=+1055.810566095" Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.537321 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ht2vd" event={"ID":"2e1bf4ab-344c-4335-b16a-828d28141f11","Type":"ContainerStarted","Data":"026df5b170cfa8de20a88617513700bc5667383cd74aaa620f0977a508471309"} Nov 24 09:11:19 crc kubenswrapper[4719]: I1124 09:11:19.558863 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerStarted","Data":"4499f7b6619f9de917b0a571c2bac985339e61f6c48bfc4eef7c2d2b89e496c9"} Nov 24 09:11:20 crc kubenswrapper[4719]: I1124 09:11:20.535189 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" path="/var/lib/kubelet/pods/b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00/volumes" Nov 24 09:11:20 crc kubenswrapper[4719]: I1124 09:11:20.584134 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" event={"ID":"db6461b8-f751-4248-a4fc-fe1b3b987706","Type":"ContainerStarted","Data":"69eb6bd56f637c44c472fcb1d0df869698d9bc78869a53d3e67b04cbfa723713"} Nov 24 09:11:20 crc kubenswrapper[4719]: I1124 09:11:20.584280 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:11:20 crc kubenswrapper[4719]: I1124 09:11:20.605506 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" podStartSLOduration=3.605488256 podStartE2EDuration="3.605488256s" podCreationTimestamp="2025-11-24 09:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:20.598939326 +0000 UTC m=+1056.930212598" watchObservedRunningTime="2025-11-24 09:11:20.605488256 +0000 UTC m=+1056.936761508" Nov 24 09:11:23 crc kubenswrapper[4719]: I1124 09:11:23.636421 4719 generic.go:334] "Generic (PLEG): container finished" podID="5285f1c4-f873-488b-bb55-643779ff8672" containerID="287380c3ec074c5c596ec45de04102841a68316200401bae503db9b7e831f9d9" exitCode=0 Nov 24 09:11:23 crc kubenswrapper[4719]: I1124 09:11:23.636515 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxqhk" event={"ID":"5285f1c4-f873-488b-bb55-643779ff8672","Type":"ContainerDied","Data":"287380c3ec074c5c596ec45de04102841a68316200401bae503db9b7e831f9d9"} Nov 24 09:11:27 crc kubenswrapper[4719]: I1124 09:11:27.557226 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:11:27 crc kubenswrapper[4719]: I1124 09:11:27.619299 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-znrbz"] Nov 24 09:11:27 crc kubenswrapper[4719]: I1124 09:11:27.619601 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" containerName="dnsmasq-dns" 
containerID="cri-o://9659d04e5ade151792fe5fa78bf7054a8fdfa214040d322feb16a96e8f819816" gracePeriod=10 Nov 24 09:11:28 crc kubenswrapper[4719]: I1124 09:11:28.690750 4719 generic.go:334] "Generic (PLEG): container finished" podID="dfd6020d-d20f-434a-8a51-b78a86354104" containerID="9659d04e5ade151792fe5fa78bf7054a8fdfa214040d322feb16a96e8f819816" exitCode=0 Nov 24 09:11:28 crc kubenswrapper[4719]: I1124 09:11:28.690800 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" event={"ID":"dfd6020d-d20f-434a-8a51-b78a86354104","Type":"ContainerDied","Data":"9659d04e5ade151792fe5fa78bf7054a8fdfa214040d322feb16a96e8f819816"} Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.654410 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.700252 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pxqhk" event={"ID":"5285f1c4-f873-488b-bb55-643779ff8672","Type":"ContainerDied","Data":"03d5684fcd8a4515ddc15731e973e5479ecff36df5d154ae4cd7b12391055bcc"} Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.700286 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d5684fcd8a4515ddc15731e973e5479ecff36df5d154ae4cd7b12391055bcc" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.700320 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pxqhk" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.785339 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-scripts\") pod \"5285f1c4-f873-488b-bb55-643779ff8672\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.785599 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-config-data\") pod \"5285f1c4-f873-488b-bb55-643779ff8672\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.785735 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-fernet-keys\") pod \"5285f1c4-f873-488b-bb55-643779ff8672\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.785797 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-credential-keys\") pod \"5285f1c4-f873-488b-bb55-643779ff8672\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.785838 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtn67\" (UniqueName: \"kubernetes.io/projected/5285f1c4-f873-488b-bb55-643779ff8672-kube-api-access-jtn67\") pod \"5285f1c4-f873-488b-bb55-643779ff8672\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.785855 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-combined-ca-bundle\") pod \"5285f1c4-f873-488b-bb55-643779ff8672\" (UID: \"5285f1c4-f873-488b-bb55-643779ff8672\") " Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.793079 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5285f1c4-f873-488b-bb55-643779ff8672" (UID: "5285f1c4-f873-488b-bb55-643779ff8672"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.795421 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5285f1c4-f873-488b-bb55-643779ff8672-kube-api-access-jtn67" (OuterVolumeSpecName: "kube-api-access-jtn67") pod "5285f1c4-f873-488b-bb55-643779ff8672" (UID: "5285f1c4-f873-488b-bb55-643779ff8672"). InnerVolumeSpecName "kube-api-access-jtn67". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.811531 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5285f1c4-f873-488b-bb55-643779ff8672" (UID: "5285f1c4-f873-488b-bb55-643779ff8672"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.814242 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-scripts" (OuterVolumeSpecName: "scripts") pod "5285f1c4-f873-488b-bb55-643779ff8672" (UID: "5285f1c4-f873-488b-bb55-643779ff8672"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.848217 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5285f1c4-f873-488b-bb55-643779ff8672" (UID: "5285f1c4-f873-488b-bb55-643779ff8672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.868021 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-config-data" (OuterVolumeSpecName: "config-data") pod "5285f1c4-f873-488b-bb55-643779ff8672" (UID: "5285f1c4-f873-488b-bb55-643779ff8672"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.887162 4719 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.887197 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtn67\" (UniqueName: \"kubernetes.io/projected/5285f1c4-f873-488b-bb55-643779ff8672-kube-api-access-jtn67\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.887209 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.887217 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.887225 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:29 crc kubenswrapper[4719]: I1124 09:11:29.887233 4719 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5285f1c4-f873-488b-bb55-643779ff8672-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.841366 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pxqhk"] Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.850503 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pxqhk"] Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.935310 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jcgws"] Nov 24 09:11:30 crc kubenswrapper[4719]: E1124 09:11:30.935641 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5285f1c4-f873-488b-bb55-643779ff8672" containerName="keystone-bootstrap" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.935657 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5285f1c4-f873-488b-bb55-643779ff8672" containerName="keystone-bootstrap" Nov 24 09:11:30 crc kubenswrapper[4719]: E1124 09:11:30.935674 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" containerName="init" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.935694 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" containerName="init" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.935849 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c96da9-e5b2-4ef8-b4c4-ae8f0868ea00" containerName="init" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.935866 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5285f1c4-f873-488b-bb55-643779ff8672" containerName="keystone-bootstrap" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.936438 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.938459 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.938711 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.938730 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d4gqc" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.938893 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.940480 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 09:11:30 crc kubenswrapper[4719]: I1124 09:11:30.966758 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jcgws"] Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.009219 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4x8v\" (UniqueName: \"kubernetes.io/projected/ddb9444b-a866-41c9-af6d-831061243d3c-kube-api-access-q4x8v\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.009435 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-config-data\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.009568 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-fernet-keys\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.009700 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-scripts\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.009798 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-combined-ca-bundle\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.009891 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-credential-keys\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.111310 4719 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-credential-keys\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.111498 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4x8v\" (UniqueName: \"kubernetes.io/projected/ddb9444b-a866-41c9-af6d-831061243d3c-kube-api-access-q4x8v\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.111528 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-config-data\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.111568 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-fernet-keys\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.111602 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-scripts\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.111647 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-combined-ca-bundle\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.115602 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-scripts\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.115777 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-combined-ca-bundle\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.118800 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-fernet-keys\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.122066 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-config-data\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") 
" pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.127124 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-credential-keys\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.129672 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4x8v\" (UniqueName: \"kubernetes.io/projected/ddb9444b-a866-41c9-af6d-831061243d3c-kube-api-access-q4x8v\") pod \"keystone-bootstrap-jcgws\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:31 crc kubenswrapper[4719]: I1124 09:11:31.254353 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:32 crc kubenswrapper[4719]: I1124 09:11:32.533772 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5285f1c4-f873-488b-bb55-643779ff8672" path="/var/lib/kubelet/pods/5285f1c4-f873-488b-bb55-643779ff8672/volumes" Nov 24 09:11:35 crc kubenswrapper[4719]: I1124 09:11:35.708675 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.406802 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.538997 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-config\") pod \"dfd6020d-d20f-434a-8a51-b78a86354104\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.539097 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-nb\") pod \"dfd6020d-d20f-434a-8a51-b78a86354104\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.539183 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-sb\") pod \"dfd6020d-d20f-434a-8a51-b78a86354104\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.539274 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-dns-svc\") pod \"dfd6020d-d20f-434a-8a51-b78a86354104\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.539335 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll9zz\" (UniqueName: \"kubernetes.io/projected/dfd6020d-d20f-434a-8a51-b78a86354104-kube-api-access-ll9zz\") pod \"dfd6020d-d20f-434a-8a51-b78a86354104\" (UID: \"dfd6020d-d20f-434a-8a51-b78a86354104\") " Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.548200 4719 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd6020d-d20f-434a-8a51-b78a86354104-kube-api-access-ll9zz" (OuterVolumeSpecName: "kube-api-access-ll9zz") pod "dfd6020d-d20f-434a-8a51-b78a86354104" (UID: "dfd6020d-d20f-434a-8a51-b78a86354104"). InnerVolumeSpecName "kube-api-access-ll9zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.596016 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-config" (OuterVolumeSpecName: "config") pod "dfd6020d-d20f-434a-8a51-b78a86354104" (UID: "dfd6020d-d20f-434a-8a51-b78a86354104"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.598521 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dfd6020d-d20f-434a-8a51-b78a86354104" (UID: "dfd6020d-d20f-434a-8a51-b78a86354104"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.606809 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dfd6020d-d20f-434a-8a51-b78a86354104" (UID: "dfd6020d-d20f-434a-8a51-b78a86354104"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.616525 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dfd6020d-d20f-434a-8a51-b78a86354104" (UID: "dfd6020d-d20f-434a-8a51-b78a86354104"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.641783 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.641819 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.641830 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.641838 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfd6020d-d20f-434a-8a51-b78a86354104-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.641848 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll9zz\" (UniqueName: \"kubernetes.io/projected/dfd6020d-d20f-434a-8a51-b78a86354104-kube-api-access-ll9zz\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.776769 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" event={"ID":"dfd6020d-d20f-434a-8a51-b78a86354104","Type":"ContainerDied","Data":"76efd6987bad0ad1cd681e1ba728c0acae7cce3d231e426a86e08cad07d26696"} Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.776838 4719 scope.go:117] "RemoveContainer" containerID="9659d04e5ade151792fe5fa78bf7054a8fdfa214040d322feb16a96e8f819816" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.776853 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.825248 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-znrbz"] Nov 24 09:11:37 crc kubenswrapper[4719]: I1124 09:11:37.834572 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-znrbz"] Nov 24 09:11:38 crc kubenswrapper[4719]: I1124 09:11:38.532735 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" path="/var/lib/kubelet/pods/dfd6020d-d20f-434a-8a51-b78a86354104/volumes" Nov 24 09:11:38 crc kubenswrapper[4719]: I1124 09:11:38.545422 4719 scope.go:117] "RemoveContainer" containerID="918e316484ce13cb4b45ce3779c1a4fec3b63bbe68c5f787868b52b4f824bf5e" Nov 24 09:11:38 crc kubenswrapper[4719]: E1124 09:11:38.555543 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 09:11:38 crc kubenswrapper[4719]: E1124 09:11:38.555814 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s6sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8bn65_openstack(902a4567-228a-43e0-b6c4-c323c4366c94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 09:11:38 crc kubenswrapper[4719]: E1124 09:11:38.557259 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8bn65" podUID="902a4567-228a-43e0-b6c4-c323c4366c94" Nov 24 09:11:38 crc kubenswrapper[4719]: E1124 09:11:38.799487 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-8bn65" podUID="902a4567-228a-43e0-b6c4-c323c4366c94" Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.067051 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jcgws"] Nov 24 09:11:39 crc kubenswrapper[4719]: W1124 09:11:39.072852 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddb9444b_a866_41c9_af6d_831061243d3c.slice/crio-6421ecaa655775e27d3f9523f7bb3249e869679e2e8c51a77277b9ea35b8ec41 WatchSource:0}: Error finding container 6421ecaa655775e27d3f9523f7bb3249e869679e2e8c51a77277b9ea35b8ec41: Status 404 returned error can't find the container with id 6421ecaa655775e27d3f9523f7bb3249e869679e2e8c51a77277b9ea35b8ec41 Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.806599 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jcgws" event={"ID":"ddb9444b-a866-41c9-af6d-831061243d3c","Type":"ContainerStarted","Data":"1c2b454f96566e0f7f527de9b6ce08e339cbd2b34451cb98829c77dbc7327c82"} Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.806947 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jcgws" event={"ID":"ddb9444b-a866-41c9-af6d-831061243d3c","Type":"ContainerStarted","Data":"6421ecaa655775e27d3f9523f7bb3249e869679e2e8c51a77277b9ea35b8ec41"} Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.813591 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kggqc" event={"ID":"84a9592e-0967-49ec-a421-66e027b6d56a","Type":"ContainerStarted","Data":"b854ce9f7d89a39993476d675b4312e386b3801aef8b2c845902af90e55cdc18"} Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.820413 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ht2vd" event={"ID":"2e1bf4ab-344c-4335-b16a-828d28141f11","Type":"ContainerStarted","Data":"249bd316aa3178b10dabe1da063dfc5c37b759599c82c1bcb717ec8164f6fa7b"} Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.831679 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jcgws" podStartSLOduration=9.831662405 podStartE2EDuration="9.831662405s" podCreationTimestamp="2025-11-24 09:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:39.826988079 +0000 UTC m=+1076.158261341" watchObservedRunningTime="2025-11-24 09:11:39.831662405 +0000 UTC m=+1076.162935657" Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.835761 4719 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerStarted","Data":"7f11e627c6a4276a05cd5af15840dc44bbaa607f65ce24c5e48be532c044e5f4"} Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.851407 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ht2vd" podStartSLOduration=2.708447455 podStartE2EDuration="22.851385476s" podCreationTimestamp="2025-11-24 09:11:17 +0000 UTC" firstStartedPulling="2025-11-24 09:11:18.416531129 +0000 UTC m=+1054.747804381" lastFinishedPulling="2025-11-24 09:11:38.55946915 +0000 UTC m=+1074.890742402" observedRunningTime="2025-11-24 09:11:39.847630397 +0000 UTC m=+1076.178903659" watchObservedRunningTime="2025-11-24 09:11:39.851385476 +0000 UTC m=+1076.182658738" Nov 24 09:11:39 crc kubenswrapper[4719]: I1124 09:11:39.877785 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kggqc" podStartSLOduration=3.665189581 podStartE2EDuration="23.877766901s" podCreationTimestamp="2025-11-24 09:11:16 +0000 UTC" firstStartedPulling="2025-11-24 09:11:18.350972938 +0000 UTC m=+1054.682246180" lastFinishedPulling="2025-11-24 09:11:38.563550258 +0000 UTC m=+1074.894823500" observedRunningTime="2025-11-24 09:11:39.859976925 +0000 UTC m=+1076.191250167" watchObservedRunningTime="2025-11-24 09:11:39.877766901 +0000 UTC m=+1076.209040153" Nov 24 09:11:40 crc kubenswrapper[4719]: I1124 09:11:40.710049 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554567b4f7-znrbz" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Nov 24 09:11:40 crc kubenswrapper[4719]: I1124 09:11:40.853450 4719 generic.go:334] "Generic (PLEG): container finished" podID="32da9e0b-97ee-48e0-bdd2-2c21bb019294" containerID="d2d6692fa00534dc12ffb23def6ee8755851aa7601abdb202c2bf066688f9a82" exitCode=0 Nov 24 09:11:40 crc kubenswrapper[4719]: I1124 09:11:40.853507 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-k2l9n" event={"ID":"32da9e0b-97ee-48e0-bdd2-2c21bb019294","Type":"ContainerDied","Data":"d2d6692fa00534dc12ffb23def6ee8755851aa7601abdb202c2bf066688f9a82"} Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.216566 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.315290 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-combined-ca-bundle\") pod \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.316171 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-config\") pod \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.316545 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsj2x\" (UniqueName: \"kubernetes.io/projected/32da9e0b-97ee-48e0-bdd2-2c21bb019294-kube-api-access-gsj2x\") pod \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\" (UID: \"32da9e0b-97ee-48e0-bdd2-2c21bb019294\") " Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.324216 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32da9e0b-97ee-48e0-bdd2-2c21bb019294-kube-api-access-gsj2x" (OuterVolumeSpecName: "kube-api-access-gsj2x") pod "32da9e0b-97ee-48e0-bdd2-2c21bb019294" (UID: "32da9e0b-97ee-48e0-bdd2-2c21bb019294"). InnerVolumeSpecName "kube-api-access-gsj2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.337679 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32da9e0b-97ee-48e0-bdd2-2c21bb019294" (UID: "32da9e0b-97ee-48e0-bdd2-2c21bb019294"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.347191 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-config" (OuterVolumeSpecName: "config") pod "32da9e0b-97ee-48e0-bdd2-2c21bb019294" (UID: "32da9e0b-97ee-48e0-bdd2-2c21bb019294"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.439857 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsj2x\" (UniqueName: \"kubernetes.io/projected/32da9e0b-97ee-48e0-bdd2-2c21bb019294-kube-api-access-gsj2x\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.439889 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.439898 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/32da9e0b-97ee-48e0-bdd2-2c21bb019294-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.871516 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-k2l9n" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.872189 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-k2l9n" event={"ID":"32da9e0b-97ee-48e0-bdd2-2c21bb019294","Type":"ContainerDied","Data":"b81649a26b8ac29ae1528e95df6223de5226dc5b4fa375d1dcdbfbad9657c85d"} Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.872218 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b81649a26b8ac29ae1528e95df6223de5226dc5b4fa375d1dcdbfbad9657c85d" Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.873624 4719 generic.go:334] "Generic (PLEG): container finished" podID="2e1bf4ab-344c-4335-b16a-828d28141f11" containerID="249bd316aa3178b10dabe1da063dfc5c37b759599c82c1bcb717ec8164f6fa7b" exitCode=0 Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.873664 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ht2vd" event={"ID":"2e1bf4ab-344c-4335-b16a-828d28141f11","Type":"ContainerDied","Data":"249bd316aa3178b10dabe1da063dfc5c37b759599c82c1bcb717ec8164f6fa7b"} Nov 24 09:11:42 crc kubenswrapper[4719]: I1124 09:11:42.876305 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerStarted","Data":"219c4c45c6d0f7689efd72214412a91e927b78c719d636cf8ee0ddc068a3b715"} Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.058095 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fv2jx"] Nov 24 09:11:43 crc kubenswrapper[4719]: E1124 09:11:43.058392 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32da9e0b-97ee-48e0-bdd2-2c21bb019294" containerName="neutron-db-sync" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.058404 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="32da9e0b-97ee-48e0-bdd2-2c21bb019294" containerName="neutron-db-sync" Nov 24 09:11:43 crc kubenswrapper[4719]: E1124 09:11:43.058424 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" containerName="dnsmasq-dns" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.058430 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" containerName="dnsmasq-dns" Nov 24 09:11:43 crc kubenswrapper[4719]: E1124 09:11:43.058448 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" containerName="init" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.058455 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" containerName="init" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.058613 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd6020d-d20f-434a-8a51-b78a86354104" containerName="dnsmasq-dns" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.058635 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="32da9e0b-97ee-48e0-bdd2-2c21bb019294" containerName="neutron-db-sync" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.059469 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.069295 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fv2jx"] Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.158755 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.159133 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.159196 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.159290 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-config\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.159347 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw2b9\" (UniqueName: \"kubernetes.io/projected/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-kube-api-access-kw2b9\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.261667 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.261758 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.261855 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-config\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.261890 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kw2b9\" (UniqueName: \"kubernetes.io/projected/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-kube-api-access-kw2b9\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.261913 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.263479 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.265189 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.265210 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.265970 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-config\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.301227 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw2b9\" (UniqueName: \"kubernetes.io/projected/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-kube-api-access-kw2b9\") pod \"dnsmasq-dns-5f66db59b9-fv2jx\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.408470 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.434900 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d4bdff97d-5nfdc"] Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.436251 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.439911 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d798x" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.441770 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.442054 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.442190 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.447417 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d4bdff97d-5nfdc"] Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.572522 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-config\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.572558 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-httpd-config\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.572638 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-combined-ca-bundle\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.572669 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwgcv\" (UniqueName: \"kubernetes.io/projected/09971473-24eb-4506-8257-8fe16cdc271a-kube-api-access-gwgcv\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.572915 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-ovndb-tls-certs\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.676124 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-config\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.676487 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-httpd-config\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: 
\"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.676581 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-combined-ca-bundle\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.676618 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwgcv\" (UniqueName: \"kubernetes.io/projected/09971473-24eb-4506-8257-8fe16cdc271a-kube-api-access-gwgcv\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.676728 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-ovndb-tls-certs\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.683461 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-ovndb-tls-certs\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.684112 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-httpd-config\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.692005 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-config\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.697148 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-combined-ca-bundle\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.730758 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwgcv\" (UniqueName: \"kubernetes.io/projected/09971473-24eb-4506-8257-8fe16cdc271a-kube-api-access-gwgcv\") pod \"neutron-6d4bdff97d-5nfdc\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:43 crc kubenswrapper[4719]: I1124 09:11:43.815397 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.118437 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fv2jx"] Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.458454 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.491652 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-combined-ca-bundle\") pod \"2e1bf4ab-344c-4335-b16a-828d28141f11\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.491692 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-scripts\") pod \"2e1bf4ab-344c-4335-b16a-828d28141f11\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.491723 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqkt4\" (UniqueName: \"kubernetes.io/projected/2e1bf4ab-344c-4335-b16a-828d28141f11-kube-api-access-tqkt4\") pod \"2e1bf4ab-344c-4335-b16a-828d28141f11\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.491779 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1bf4ab-344c-4335-b16a-828d28141f11-logs\") pod \"2e1bf4ab-344c-4335-b16a-828d28141f11\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.491809 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-config-data\") pod \"2e1bf4ab-344c-4335-b16a-828d28141f11\" (UID: \"2e1bf4ab-344c-4335-b16a-828d28141f11\") " Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.493963 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1bf4ab-344c-4335-b16a-828d28141f11-logs" (OuterVolumeSpecName: "logs") pod "2e1bf4ab-344c-4335-b16a-828d28141f11" (UID: "2e1bf4ab-344c-4335-b16a-828d28141f11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.501360 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1bf4ab-344c-4335-b16a-828d28141f11-kube-api-access-tqkt4" (OuterVolumeSpecName: "kube-api-access-tqkt4") pod "2e1bf4ab-344c-4335-b16a-828d28141f11" (UID: "2e1bf4ab-344c-4335-b16a-828d28141f11"). InnerVolumeSpecName "kube-api-access-tqkt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.520223 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-scripts" (OuterVolumeSpecName: "scripts") pod "2e1bf4ab-344c-4335-b16a-828d28141f11" (UID: "2e1bf4ab-344c-4335-b16a-828d28141f11"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:44 crc kubenswrapper[4719]: W1124 09:11:44.539759 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09971473_24eb_4506_8257_8fe16cdc271a.slice/crio-a5ae7fece151b5a0d7ff015ea3c41ec4bffaac73b10727dd6f68bb2a6c80404e WatchSource:0}: Error finding container a5ae7fece151b5a0d7ff015ea3c41ec4bffaac73b10727dd6f68bb2a6c80404e: Status 404 returned error can't find the container with id a5ae7fece151b5a0d7ff015ea3c41ec4bffaac73b10727dd6f68bb2a6c80404e Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.555623 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-config-data" (OuterVolumeSpecName: "config-data") pod "2e1bf4ab-344c-4335-b16a-828d28141f11" (UID: "2e1bf4ab-344c-4335-b16a-828d28141f11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.559143 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d4bdff97d-5nfdc"] Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.577602 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e1bf4ab-344c-4335-b16a-828d28141f11" (UID: "2e1bf4ab-344c-4335-b16a-828d28141f11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.599286 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.599312 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqkt4\" (UniqueName: \"kubernetes.io/projected/2e1bf4ab-344c-4335-b16a-828d28141f11-kube-api-access-tqkt4\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.599323 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1bf4ab-344c-4335-b16a-828d28141f11-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.599331 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.599339 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1bf4ab-344c-4335-b16a-828d28141f11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.929857 4719 generic.go:334] "Generic (PLEG): container finished" podID="84a9592e-0967-49ec-a421-66e027b6d56a" containerID="b854ce9f7d89a39993476d675b4312e386b3801aef8b2c845902af90e55cdc18" exitCode=0 Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.929947 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kggqc" event={"ID":"84a9592e-0967-49ec-a421-66e027b6d56a","Type":"ContainerDied","Data":"b854ce9f7d89a39993476d675b4312e386b3801aef8b2c845902af90e55cdc18"} Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 
09:11:44.933419 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ht2vd" event={"ID":"2e1bf4ab-344c-4335-b16a-828d28141f11","Type":"ContainerDied","Data":"026df5b170cfa8de20a88617513700bc5667383cd74aaa620f0977a508471309"} Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.933446 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="026df5b170cfa8de20a88617513700bc5667383cd74aaa620f0977a508471309" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.933491 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ht2vd" Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.958797 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdff97d-5nfdc" event={"ID":"09971473-24eb-4506-8257-8fe16cdc271a","Type":"ContainerStarted","Data":"788c5d4a28562e5d24d9e87968405ccc4f9346c9a29586f3f2d680b77756ac1b"} Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.958844 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdff97d-5nfdc" event={"ID":"09971473-24eb-4506-8257-8fe16cdc271a","Type":"ContainerStarted","Data":"a5ae7fece151b5a0d7ff015ea3c41ec4bffaac73b10727dd6f68bb2a6c80404e"} Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.963759 4719 generic.go:334] "Generic (PLEG): container finished" podID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" containerID="24bbd421067463b8283b447939bc07d61aee18a2d6fefbfa0a7b7f37e0bf8eb3" exitCode=0 Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.963805 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" event={"ID":"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5","Type":"ContainerDied","Data":"24bbd421067463b8283b447939bc07d61aee18a2d6fefbfa0a7b7f37e0bf8eb3"} Nov 24 09:11:44 crc kubenswrapper[4719]: I1124 09:11:44.963845 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" event={"ID":"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5","Type":"ContainerStarted","Data":"15d1ef1829bc4a91fb1fc9cf2d8dac32ce9de4ee4edb5fd7aff507d741f88330"} Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.080401 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5478d99856-2md7b"] Nov 24 09:11:45 crc kubenswrapper[4719]: E1124 09:11:45.080893 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1bf4ab-344c-4335-b16a-828d28141f11" containerName="placement-db-sync" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.080911 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1bf4ab-344c-4335-b16a-828d28141f11" containerName="placement-db-sync" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.081079 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1bf4ab-344c-4335-b16a-828d28141f11" containerName="placement-db-sync" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.115398 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5478d99856-2md7b"] Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.115616 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.123211 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-27d8t" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.123470 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.123644 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.124383 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.126219 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.221157 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-config-data\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.221204 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d70d9227-aa5e-4855-b4de-8bb688c24f34-logs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.221250 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-677z7\" (UniqueName: \"kubernetes.io/projected/d70d9227-aa5e-4855-b4de-8bb688c24f34-kube-api-access-677z7\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.221280 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-scripts\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.221304 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-public-tls-certs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.221330 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-internal-tls-certs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.221395 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-combined-ca-bundle\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.323988 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-public-tls-certs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.324055 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-internal-tls-certs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.324113 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-combined-ca-bundle\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.324156 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-config-data\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.324184 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d70d9227-aa5e-4855-b4de-8bb688c24f34-logs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.324208 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-677z7\" (UniqueName: \"kubernetes.io/projected/d70d9227-aa5e-4855-b4de-8bb688c24f34-kube-api-access-677z7\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.324235 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-scripts\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.327504 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d70d9227-aa5e-4855-b4de-8bb688c24f34-logs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.332346 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-scripts\") pod \"placement-5478d99856-2md7b\" (UID: 
\"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.341320 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-public-tls-certs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.343715 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-internal-tls-certs\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.349370 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-combined-ca-bundle\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.363529 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d70d9227-aa5e-4855-b4de-8bb688c24f34-config-data\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.364052 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-677z7\" (UniqueName: \"kubernetes.io/projected/d70d9227-aa5e-4855-b4de-8bb688c24f34-kube-api-access-677z7\") pod \"placement-5478d99856-2md7b\" (UID: \"d70d9227-aa5e-4855-b4de-8bb688c24f34\") " pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.494436 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.977067 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" event={"ID":"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5","Type":"ContainerStarted","Data":"ababbf549de8f1d27259d78ccc40ba823648c89650f2b3dff04a6bc5dc737cf1"} Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.977438 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.984479 4719 generic.go:334] "Generic (PLEG): container finished" podID="ddb9444b-a866-41c9-af6d-831061243d3c" containerID="1c2b454f96566e0f7f527de9b6ce08e339cbd2b34451cb98829c77dbc7327c82" exitCode=0 Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.984521 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jcgws" event={"ID":"ddb9444b-a866-41c9-af6d-831061243d3c","Type":"ContainerDied","Data":"1c2b454f96566e0f7f527de9b6ce08e339cbd2b34451cb98829c77dbc7327c82"} Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.986749 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdff97d-5nfdc" event={"ID":"09971473-24eb-4506-8257-8fe16cdc271a","Type":"ContainerStarted","Data":"4169d4b379514e806fa639d8c20ae168a9d9730f32cfc29abc68fb61c4d50221"} Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.986892 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:11:45 crc kubenswrapper[4719]: I1124 09:11:45.996817 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" podStartSLOduration=2.9967994559999998 podStartE2EDuration="2.996799456s" podCreationTimestamp="2025-11-24 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:45.994997763 +0000 UTC m=+1082.326271025" watchObservedRunningTime="2025-11-24 09:11:45.996799456 +0000 UTC m=+1082.328072708" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.035824 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d4bdff97d-5nfdc" podStartSLOduration=3.035808497 podStartE2EDuration="3.035808497s" podCreationTimestamp="2025-11-24 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:46.032606964 +0000 UTC m=+1082.363880226" watchObservedRunningTime="2025-11-24 09:11:46.035808497 +0000 UTC m=+1082.367081749" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.132336 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5478d99856-2md7b"] Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.398189 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.462714 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-combined-ca-bundle\") pod \"84a9592e-0967-49ec-a421-66e027b6d56a\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.463059 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-db-sync-config-data\") pod \"84a9592e-0967-49ec-a421-66e027b6d56a\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.463247 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjswt\" (UniqueName: \"kubernetes.io/projected/84a9592e-0967-49ec-a421-66e027b6d56a-kube-api-access-zjswt\") pod \"84a9592e-0967-49ec-a421-66e027b6d56a\" (UID: \"84a9592e-0967-49ec-a421-66e027b6d56a\") " Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.467424 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a9592e-0967-49ec-a421-66e027b6d56a-kube-api-access-zjswt" (OuterVolumeSpecName: "kube-api-access-zjswt") pod "84a9592e-0967-49ec-a421-66e027b6d56a" (UID: "84a9592e-0967-49ec-a421-66e027b6d56a"). InnerVolumeSpecName "kube-api-access-zjswt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.471212 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "84a9592e-0967-49ec-a421-66e027b6d56a" (UID: "84a9592e-0967-49ec-a421-66e027b6d56a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.495936 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84a9592e-0967-49ec-a421-66e027b6d56a" (UID: "84a9592e-0967-49ec-a421-66e027b6d56a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.567024 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjswt\" (UniqueName: \"kubernetes.io/projected/84a9592e-0967-49ec-a421-66e027b6d56a-kube-api-access-zjswt\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.567078 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.567091 4719 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/84a9592e-0967-49ec-a421-66e027b6d56a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.998197 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kggqc" event={"ID":"84a9592e-0967-49ec-a421-66e027b6d56a","Type":"ContainerDied","Data":"beb78ebfe88d8d01e2847ee2a6df85c4052281fb2b83e844e84b44ca43a49d02"} Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.998238 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb78ebfe88d8d01e2847ee2a6df85c4052281fb2b83e844e84b44ca43a49d02" Nov 24 09:11:46 crc kubenswrapper[4719]: I1124 09:11:46.998296 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kggqc" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.011319 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5478d99856-2md7b" event={"ID":"d70d9227-aa5e-4855-b4de-8bb688c24f34","Type":"ContainerStarted","Data":"bd5e1478255949a186d491e20c3510d4ae7e4c78f805339a87f151e2c9cdcaf5"} Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.011363 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5478d99856-2md7b" event={"ID":"d70d9227-aa5e-4855-b4de-8bb688c24f34","Type":"ContainerStarted","Data":"a76886c2a97674acb29163d99f88d632750a2c245c66f2e1f1a9eda8c7aa8826"} Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.011373 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5478d99856-2md7b" event={"ID":"d70d9227-aa5e-4855-b4de-8bb688c24f34","Type":"ContainerStarted","Data":"491b4769e32a17174ea663354bbc761919afe0d77f40b1b1645e7c9f489656af"} Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.038334 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5478d99856-2md7b" podStartSLOduration=2.038316752 podStartE2EDuration="2.038316752s" podCreationTimestamp="2025-11-24 09:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:47.02892867 +0000 UTC m=+1083.360201942" watchObservedRunningTime="2025-11-24 09:11:47.038316752 +0000 UTC m=+1083.369589994" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.286399 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-68fd59f556-bvd2x"] Nov 24 09:11:47 crc kubenswrapper[4719]: E1124 09:11:47.287006 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a9592e-0967-49ec-a421-66e027b6d56a" containerName="barbican-db-sync" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.287017 4719 
state_mem.go:107] "Deleted CPUSet assignment" podUID="84a9592e-0967-49ec-a421-66e027b6d56a" containerName="barbican-db-sync" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.287207 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a9592e-0967-49ec-a421-66e027b6d56a" containerName="barbican-db-sync" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.288013 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.300926 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-6rsp8" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.301193 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.303186 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.304185 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55fc6d8c7-9576d"] Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.305649 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.315573 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.332168 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-68fd59f556-bvd2x"] Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.348729 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55fc6d8c7-9576d"] Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.374437 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fv2jx"] Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392103 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-config-data\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392151 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-logs\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392174 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-combined-ca-bundle\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392211 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-config-data\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392235 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-combined-ca-bundle\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392261 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-config-data-custom\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392279 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-config-data-custom\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392304 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fnr\" (UniqueName: \"kubernetes.io/projected/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-kube-api-access-r5fnr\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392321 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqhbp\" (UniqueName: \"kubernetes.io/projected/6feeb8da-45f5-4eb9-bae3-5101afc7e021-kube-api-access-gqhbp\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.392340 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6feeb8da-45f5-4eb9-bae3-5101afc7e021-logs\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495112 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-combined-ca-bundle\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495183 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-config-data\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " 
pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495220 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-combined-ca-bundle\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495263 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-config-data-custom\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495288 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-config-data-custom\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495332 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fnr\" (UniqueName: \"kubernetes.io/projected/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-kube-api-access-r5fnr\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495362 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqhbp\" (UniqueName: \"kubernetes.io/projected/6feeb8da-45f5-4eb9-bae3-5101afc7e021-kube-api-access-gqhbp\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495391 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6feeb8da-45f5-4eb9-bae3-5101afc7e021-logs\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495507 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-config-data\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495547 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-logs\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.495987 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-logs\") pod 
\"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.496690 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6feeb8da-45f5-4eb9-bae3-5101afc7e021-logs\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.526970 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-config-data-custom\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.540201 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-combined-ca-bundle\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.543845 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-lqxms"] Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.547950 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.553293 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fnr\" (UniqueName: \"kubernetes.io/projected/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-kube-api-access-r5fnr\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.553967 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-combined-ca-bundle\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.559619 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-config-data\") pod \"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.551028 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6feeb8da-45f5-4eb9-bae3-5101afc7e021-config-data\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.566232 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb-config-data-custom\") pod 
\"barbican-worker-55fc6d8c7-9576d\" (UID: \"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb\") " pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.588094 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-lqxms"] Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.598880 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqhbp\" (UniqueName: \"kubernetes.io/projected/6feeb8da-45f5-4eb9-bae3-5101afc7e021-kube-api-access-gqhbp\") pod \"barbican-keystone-listener-68fd59f556-bvd2x\" (UID: \"6feeb8da-45f5-4eb9-bae3-5101afc7e021\") " pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.604499 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-785454dbb-gxnhx"] Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.607939 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-dns-svc\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.608170 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsrxb\" (UniqueName: \"kubernetes.io/projected/f27e073e-ba9a-47c7-858a-b2a7a28e867f-kube-api-access-tsrxb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.608231 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.608284 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-config\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.608310 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.636580 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-785454dbb-gxnhx"] Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.636695 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.643761 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.651607 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.700133 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55fc6d8c7-9576d" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713351 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713431 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-dns-svc\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713474 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5744d56-d51c-4529-9753-440276861091-logs\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713529 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsrxb\" (UniqueName: \"kubernetes.io/projected/f27e073e-ba9a-47c7-858a-b2a7a28e867f-kube-api-access-tsrxb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713556 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data-custom\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713585 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713645 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-config\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713666 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kslj\" (UniqueName: \"kubernetes.io/projected/f5744d56-d51c-4529-9753-440276861091-kube-api-access-4kslj\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713690 4719 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.713740 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-combined-ca-bundle\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.714858 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-dns-svc\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.715926 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.715934 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-config\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.716693 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.744401 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsrxb\" (UniqueName: \"kubernetes.io/projected/f27e073e-ba9a-47c7-858a-b2a7a28e867f-kube-api-access-tsrxb\") pod \"dnsmasq-dns-869f779d85-lqxms\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.751422 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.814616 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-credential-keys\") pod \"ddb9444b-a866-41c9-af6d-831061243d3c\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.814658 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4x8v\" (UniqueName: \"kubernetes.io/projected/ddb9444b-a866-41c9-af6d-831061243d3c-kube-api-access-q4x8v\") pod \"ddb9444b-a866-41c9-af6d-831061243d3c\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.814684 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-fernet-keys\") pod \"ddb9444b-a866-41c9-af6d-831061243d3c\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.814790 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-combined-ca-bundle\") pod \"ddb9444b-a866-41c9-af6d-831061243d3c\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.820423 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-scripts\") pod \"ddb9444b-a866-41c9-af6d-831061243d3c\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.820505 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-config-data\") pod \"ddb9444b-a866-41c9-af6d-831061243d3c\" (UID: \"ddb9444b-a866-41c9-af6d-831061243d3c\") " Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.820826 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5744d56-d51c-4529-9753-440276861091-logs\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.820938 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data-custom\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.821667 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kslj\" (UniqueName: \"kubernetes.io/projected/f5744d56-d51c-4529-9753-440276861091-kube-api-access-4kslj\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.821790 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-combined-ca-bundle\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.821890 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.836996 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.839857 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data-custom\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.840213 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5744d56-d51c-4529-9753-440276861091-logs\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.842448 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ddb9444b-a866-41c9-af6d-831061243d3c" (UID: "ddb9444b-a866-41c9-af6d-831061243d3c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.842873 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ddb9444b-a866-41c9-af6d-831061243d3c" (UID: "ddb9444b-a866-41c9-af6d-831061243d3c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.843013 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-scripts" (OuterVolumeSpecName: "scripts") pod "ddb9444b-a866-41c9-af6d-831061243d3c" (UID: "ddb9444b-a866-41c9-af6d-831061243d3c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.855837 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86d4855669-sjtqj"] Nov 24 09:11:47 crc kubenswrapper[4719]: E1124 09:11:47.856325 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb9444b-a866-41c9-af6d-831061243d3c" containerName="keystone-bootstrap" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.856345 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb9444b-a866-41c9-af6d-831061243d3c" containerName="keystone-bootstrap" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.856588 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb9444b-a866-41c9-af6d-831061243d3c" containerName="keystone-bootstrap" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.858025 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb9444b-a866-41c9-af6d-831061243d3c-kube-api-access-q4x8v" (OuterVolumeSpecName: "kube-api-access-q4x8v") pod "ddb9444b-a866-41c9-af6d-831061243d3c" (UID: "ddb9444b-a866-41c9-af6d-831061243d3c"). InnerVolumeSpecName "kube-api-access-q4x8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.858111 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.859665 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.859881 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.859902 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-combined-ca-bundle\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.875876 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddb9444b-a866-41c9-af6d-831061243d3c" (UID: "ddb9444b-a866-41c9-af6d-831061243d3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.877868 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kslj\" (UniqueName: \"kubernetes.io/projected/f5744d56-d51c-4529-9753-440276861091-kube-api-access-4kslj\") pod \"barbican-api-785454dbb-gxnhx\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.900356 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-config-data" (OuterVolumeSpecName: "config-data") pod "ddb9444b-a866-41c9-af6d-831061243d3c" (UID: "ddb9444b-a866-41c9-af6d-831061243d3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923024 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-combined-ca-bundle\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923088 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-public-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923140 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzjqw\" (UniqueName: \"kubernetes.io/projected/735cee72-40a1-4828-936f-9459f731b3da-kube-api-access-xzjqw\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923172 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-internal-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923196 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-ovndb-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923250 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-httpd-config\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923276 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-config\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923341 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923353 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923364 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923375 4719 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923385 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4x8v\" (UniqueName: \"kubernetes.io/projected/ddb9444b-a866-41c9-af6d-831061243d3c-kube-api-access-q4x8v\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.923393 4719 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb9444b-a866-41c9-af6d-831061243d3c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:47 crc kubenswrapper[4719]: I1124 09:11:47.941756 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86d4855669-sjtqj"] Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.024673 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzjqw\" (UniqueName: \"kubernetes.io/projected/735cee72-40a1-4828-936f-9459f731b3da-kube-api-access-xzjqw\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.024737 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-internal-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.024825 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-ovndb-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.025518 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-httpd-config\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.025562 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-config\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.025658 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-combined-ca-bundle\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.025684 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-public-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.035722 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-httpd-config\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.045445 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.051437 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-combined-ca-bundle\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.051623 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-internal-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.069758 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-ovndb-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.076571 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-config\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.080561 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzjqw\" (UniqueName: \"kubernetes.io/projected/735cee72-40a1-4828-936f-9459f731b3da-kube-api-access-xzjqw\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.083648 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/735cee72-40a1-4828-936f-9459f731b3da-public-tls-certs\") pod \"neutron-86d4855669-sjtqj\" (UID: \"735cee72-40a1-4828-936f-9459f731b3da\") " pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.088669 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.105626 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" podUID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" containerName="dnsmasq-dns" containerID="cri-o://ababbf549de8f1d27259d78ccc40ba823648c89650f2b3dff04a6bc5dc737cf1" gracePeriod=10 Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.106104 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jcgws" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.106286 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jcgws" event={"ID":"ddb9444b-a866-41c9-af6d-831061243d3c","Type":"ContainerDied","Data":"6421ecaa655775e27d3f9523f7bb3249e869679e2e8c51a77277b9ea35b8ec41"} Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.106400 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6421ecaa655775e27d3f9523f7bb3249e869679e2e8c51a77277b9ea35b8ec41" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.106505 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.106944 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5478d99856-2md7b" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.196862 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c66bd98b8-qwf7d"] Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.208498 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.210508 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.211349 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.212334 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.212466 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.212599 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.212707 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.212848 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d4gqc" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.231806 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c66bd98b8-qwf7d"] Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.307463 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-68fd59f556-bvd2x"] Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.331394 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-credential-keys\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.331433 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-config-data\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.331474 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwssg\" (UniqueName: \"kubernetes.io/projected/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-kube-api-access-vwssg\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.331496 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-internal-tls-certs\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.331586 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-combined-ca-bundle\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.331604 4719 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-scripts\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.331622 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-public-tls-certs\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.331647 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-fernet-keys\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.433251 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-config-data\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.433329 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwssg\" (UniqueName: \"kubernetes.io/projected/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-kube-api-access-vwssg\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.433372 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-internal-tls-certs\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.433490 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-combined-ca-bundle\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.433511 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-scripts\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.433531 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-public-tls-certs\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.433576 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-fernet-keys\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.433608 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-credential-keys\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.438747 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-public-tls-certs\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.439187 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-combined-ca-bundle\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.439348 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-fernet-keys\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.440025 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-config-data\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.441613 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-scripts\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.449861 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwssg\" (UniqueName: \"kubernetes.io/projected/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-kube-api-access-vwssg\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.456636 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-credential-keys\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.459389 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bfe0fc6-5440-468a-9ad6-6f9f6171e639-internal-tls-certs\") pod \"keystone-c66bd98b8-qwf7d\" (UID: \"4bfe0fc6-5440-468a-9ad6-6f9f6171e639\") " pod="openstack/keystone-c66bd98b8-qwf7d" 
Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.550425 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55fc6d8c7-9576d"] Nov 24 09:11:48 crc kubenswrapper[4719]: I1124 09:11:48.557408 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:49 crc kubenswrapper[4719]: I1124 09:11:49.125590 4719 generic.go:334] "Generic (PLEG): container finished" podID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" containerID="ababbf549de8f1d27259d78ccc40ba823648c89650f2b3dff04a6bc5dc737cf1" exitCode=0 Nov 24 09:11:49 crc kubenswrapper[4719]: I1124 09:11:49.125755 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" event={"ID":"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5","Type":"ContainerDied","Data":"ababbf549de8f1d27259d78ccc40ba823648c89650f2b3dff04a6bc5dc737cf1"} Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.590741 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c84b4b586-mwtc8"] Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.592664 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.601088 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.601158 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.611626 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c84b4b586-mwtc8"] Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.713455 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcrl8\" (UniqueName: \"kubernetes.io/projected/390c94ff-225b-448b-963d-9b8cb729963a-kube-api-access-hcrl8\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.713526 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-config-data-custom\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.713557 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-combined-ca-bundle\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.713598 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-internal-tls-certs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.713616 4719 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/390c94ff-225b-448b-963d-9b8cb729963a-logs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.713740 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-public-tls-certs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.713782 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-config-data\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.815363 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-internal-tls-certs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.815401 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/390c94ff-225b-448b-963d-9b8cb729963a-logs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.815441 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-public-tls-certs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.815484 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-config-data\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.815513 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcrl8\" (UniqueName: \"kubernetes.io/projected/390c94ff-225b-448b-963d-9b8cb729963a-kube-api-access-hcrl8\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.815553 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-config-data-custom\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.815583 4719 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-combined-ca-bundle\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.816353 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/390c94ff-225b-448b-963d-9b8cb729963a-logs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.821287 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-internal-tls-certs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.821919 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-combined-ca-bundle\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.825343 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-config-data\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.825938 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-config-data-custom\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.834244 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/390c94ff-225b-448b-963d-9b8cb729963a-public-tls-certs\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.838995 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcrl8\" (UniqueName: \"kubernetes.io/projected/390c94ff-225b-448b-963d-9b8cb729963a-kube-api-access-hcrl8\") pod \"barbican-api-c84b4b586-mwtc8\" (UID: \"390c94ff-225b-448b-963d-9b8cb729963a\") " pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:50 crc kubenswrapper[4719]: I1124 09:11:50.913515 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.714158 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.863852 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-dns-svc\") pod \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.863911 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-nb\") pod \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.863995 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-config\") pod \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.864012 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw2b9\" (UniqueName: \"kubernetes.io/projected/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-kube-api-access-kw2b9\") pod \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.864068 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-sb\") pod \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\" (UID: \"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5\") " Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.877773 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-kube-api-access-kw2b9" (OuterVolumeSpecName: "kube-api-access-kw2b9") pod "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" (UID: "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5"). InnerVolumeSpecName "kube-api-access-kw2b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.949798 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" (UID: "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.966168 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.966196 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw2b9\" (UniqueName: \"kubernetes.io/projected/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-kube-api-access-kw2b9\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:52 crc kubenswrapper[4719]: I1124 09:11:52.973994 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" (UID: "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.002502 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" (UID: "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.004305 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-config" (OuterVolumeSpecName: "config") pod "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" (UID: "8ee1a0f0-a05b-42b8-aa93-af2c12c699b5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.069408 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.069453 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.069463 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.094914 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-lqxms"] Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.174051 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-lqxms" event={"ID":"f27e073e-ba9a-47c7-858a-b2a7a28e867f","Type":"ContainerStarted","Data":"fc525d3e0f169648ce7db61dc7c318d1ae2983caa20438ae45e53e361ca9a631"} Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.188883 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" event={"ID":"8ee1a0f0-a05b-42b8-aa93-af2c12c699b5","Type":"ContainerDied","Data":"15d1ef1829bc4a91fb1fc9cf2d8dac32ce9de4ee4edb5fd7aff507d741f88330"} Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.188923 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-fv2jx" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.188941 4719 scope.go:117] "RemoveContainer" containerID="ababbf549de8f1d27259d78ccc40ba823648c89650f2b3dff04a6bc5dc737cf1" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.194428 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55fc6d8c7-9576d" event={"ID":"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb","Type":"ContainerStarted","Data":"2e049147f941047cca0b4b45b168cbcd778e8048643d95015721e075d9f5cb8c"} Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.196818 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" event={"ID":"6feeb8da-45f5-4eb9-bae3-5101afc7e021","Type":"ContainerStarted","Data":"22bd4bfe087e5b5cf1117334910e8c5d698a7682a7b55210e1df6451bc7608dd"} Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.211551 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerStarted","Data":"79816cd0c7fa1fcdc7a5b4bf7b446db2ffee5775627a009335c7adf3f0dc6fd1"} Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.225296 4719 scope.go:117] "RemoveContainer" containerID="24bbd421067463b8283b447939bc07d61aee18a2d6fefbfa0a7b7f37e0bf8eb3" Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.253785 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fv2jx"] Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.274024 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-fv2jx"] Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.328550 4719 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-api-c84b4b586-mwtc8"] Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.363616 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c66bd98b8-qwf7d"] Nov 24 09:11:53 crc kubenswrapper[4719]: W1124 09:11:53.376500 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod735cee72_40a1_4828_936f_9459f731b3da.slice/crio-66b3a0f87b4dd1da7b9a0ec786953ec4b3d16631ba686456464b4d889b2aac46 WatchSource:0}: Error finding container 66b3a0f87b4dd1da7b9a0ec786953ec4b3d16631ba686456464b4d889b2aac46: Status 404 returned error can't find the container with id 66b3a0f87b4dd1da7b9a0ec786953ec4b3d16631ba686456464b4d889b2aac46 Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.399901 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86d4855669-sjtqj"] Nov 24 09:11:53 crc kubenswrapper[4719]: I1124 09:11:53.503576 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-785454dbb-gxnhx"] Nov 24 09:11:53 crc kubenswrapper[4719]: W1124 09:11:53.534024 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5744d56_d51c_4529_9753_440276861091.slice/crio-99f0e4419ab9d3c30ac8f96fd9613ba281a852bb8ea19a09cad7938f1dbbb3b2 WatchSource:0}: Error finding container 99f0e4419ab9d3c30ac8f96fd9613ba281a852bb8ea19a09cad7938f1dbbb3b2: Status 404 returned error can't find the container with id 99f0e4419ab9d3c30ac8f96fd9613ba281a852bb8ea19a09cad7938f1dbbb3b2 Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.228007 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d4855669-sjtqj" event={"ID":"735cee72-40a1-4828-936f-9459f731b3da","Type":"ContainerStarted","Data":"f51a635885fc899ca7f16aca789a12f195a800c81c00aaaf440e18779351208e"} Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.228070 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d4855669-sjtqj" event={"ID":"735cee72-40a1-4828-936f-9459f731b3da","Type":"ContainerStarted","Data":"66b3a0f87b4dd1da7b9a0ec786953ec4b3d16631ba686456464b4d889b2aac46"} Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.230882 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785454dbb-gxnhx" event={"ID":"f5744d56-d51c-4529-9753-440276861091","Type":"ContainerStarted","Data":"8ae5c21060ce9a1f87692605cc13180ab0b86e40ca5566ec4cca63828a649eb6"} Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.230918 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785454dbb-gxnhx" event={"ID":"f5744d56-d51c-4529-9753-440276861091","Type":"ContainerStarted","Data":"99f0e4419ab9d3c30ac8f96fd9613ba281a852bb8ea19a09cad7938f1dbbb3b2"} Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.234217 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c66bd98b8-qwf7d" event={"ID":"4bfe0fc6-5440-468a-9ad6-6f9f6171e639","Type":"ContainerStarted","Data":"f0a70080f450b25d50a585a95f029dff1188b5283861bffb28eacdec8c74fb10"} Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.234261 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c66bd98b8-qwf7d" event={"ID":"4bfe0fc6-5440-468a-9ad6-6f9f6171e639","Type":"ContainerStarted","Data":"be2e9b9a0a3829c264d0ede0a4dbbd391f8bfeb484bee08cec246a0e1faf044d"} Nov 24 09:11:54 crc kubenswrapper[4719]: 
I1124 09:11:54.235265 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.240584 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c84b4b586-mwtc8" event={"ID":"390c94ff-225b-448b-963d-9b8cb729963a","Type":"ContainerStarted","Data":"c322e4b65de7fa75d2a23f4b15e83f235e52c6a25f4600d45da0559cebe33c38"} Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.240631 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c84b4b586-mwtc8" event={"ID":"390c94ff-225b-448b-963d-9b8cb729963a","Type":"ContainerStarted","Data":"a95fa3d4df15b210f73288cf9df6bc81c1c24d8ef2f1f00ef49894fa30e470ed"} Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.245342 4719 generic.go:334] "Generic (PLEG): container finished" podID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" containerID="1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898" exitCode=0 Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.245411 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-lqxms" event={"ID":"f27e073e-ba9a-47c7-858a-b2a7a28e867f","Type":"ContainerDied","Data":"1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898"} Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.266543 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c66bd98b8-qwf7d" podStartSLOduration=6.266523665 podStartE2EDuration="6.266523665s" podCreationTimestamp="2025-11-24 09:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:54.262074946 +0000 UTC m=+1090.593348218" watchObservedRunningTime="2025-11-24 09:11:54.266523665 +0000 UTC m=+1090.597796917" Nov 24 09:11:54 crc kubenswrapper[4719]: I1124 09:11:54.535461 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" path="/var/lib/kubelet/pods/8ee1a0f0-a05b-42b8-aa93-af2c12c699b5/volumes" Nov 24 09:11:55 crc kubenswrapper[4719]: I1124 09:11:55.283261 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8bn65" event={"ID":"902a4567-228a-43e0-b6c4-c323c4366c94","Type":"ContainerStarted","Data":"fbe1391080b3ad2ec9d1385da1c125e20804d2e3a60b3e12d58aa350fc0bd326"} Nov 24 09:11:55 crc kubenswrapper[4719]: I1124 09:11:55.322505 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8bn65" podStartSLOduration=3.907590121 podStartE2EDuration="39.322466051s" podCreationTimestamp="2025-11-24 09:11:16 +0000 UTC" firstStartedPulling="2025-11-24 09:11:18.052519846 +0000 UTC m=+1054.383793098" lastFinishedPulling="2025-11-24 09:11:53.467395786 +0000 UTC m=+1089.798669028" observedRunningTime="2025-11-24 09:11:55.304461129 +0000 UTC m=+1091.635734391" watchObservedRunningTime="2025-11-24 09:11:55.322466051 +0000 UTC m=+1091.653739303" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.296904 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86d4855669-sjtqj" event={"ID":"735cee72-40a1-4828-936f-9459f731b3da","Type":"ContainerStarted","Data":"3002c1c5cd94f86496523bbe5aed5122b8bf749561991f9f52322a7d1b1ee5af"} Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.297760 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.314566 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785454dbb-gxnhx" event={"ID":"f5744d56-d51c-4529-9753-440276861091","Type":"ContainerStarted","Data":"75d2e85d957e51b222d1441debf2802a3339afc3c6193317bee9d7864dcdf17e"} Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.315489 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.315519 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.319049 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" event={"ID":"6feeb8da-45f5-4eb9-bae3-5101afc7e021","Type":"ContainerStarted","Data":"87d24912dae9413d06df33c6076cb93ed72035cf9ec24d58dba235387dacf4b4"} Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.320853 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c84b4b586-mwtc8" event={"ID":"390c94ff-225b-448b-963d-9b8cb729963a","Type":"ContainerStarted","Data":"dc9e05ffc7a031174d2cf02c0846ad2bdb95f032f2c0507351aa742965518277"} Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.322223 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.322258 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.331746 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86d4855669-sjtqj" podStartSLOduration=9.331724653 podStartE2EDuration="9.331724653s" podCreationTimestamp="2025-11-24 09:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:56.32265239 +0000 UTC m=+1092.653925662" watchObservedRunningTime="2025-11-24 09:11:56.331724653 +0000 UTC m=+1092.662997905" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.335979 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-lqxms" event={"ID":"f27e073e-ba9a-47c7-858a-b2a7a28e867f","Type":"ContainerStarted","Data":"4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc"} Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.336192 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.336987 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55fc6d8c7-9576d" event={"ID":"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb","Type":"ContainerStarted","Data":"4d99bfee02bb8ff0dd16d7be548bc14ba77535582c64e7e2c66bf868854138ff"} Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.370925 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-785454dbb-gxnhx" podStartSLOduration=9.370905219 podStartE2EDuration="9.370905219s" podCreationTimestamp="2025-11-24 09:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:56.356459811 +0000 UTC 
m=+1092.687733063" watchObservedRunningTime="2025-11-24 09:11:56.370905219 +0000 UTC m=+1092.702178471" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.403586 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-lqxms" podStartSLOduration=9.403569587 podStartE2EDuration="9.403569587s" podCreationTimestamp="2025-11-24 09:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:56.400087746 +0000 UTC m=+1092.731361008" watchObservedRunningTime="2025-11-24 09:11:56.403569587 +0000 UTC m=+1092.734842839" Nov 24 09:11:56 crc kubenswrapper[4719]: I1124 09:11:56.408469 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c84b4b586-mwtc8" podStartSLOduration=6.408451308 podStartE2EDuration="6.408451308s" podCreationTimestamp="2025-11-24 09:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:11:56.381668772 +0000 UTC m=+1092.712942034" watchObservedRunningTime="2025-11-24 09:11:56.408451308 +0000 UTC m=+1092.739724560" Nov 24 09:11:57 crc kubenswrapper[4719]: I1124 09:11:57.351311 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" event={"ID":"6feeb8da-45f5-4eb9-bae3-5101afc7e021","Type":"ContainerStarted","Data":"0612936b542a9301b11ea1784a539dedd3ef38e3ad7658ba447da3255c9f6d89"} Nov 24 09:11:57 crc kubenswrapper[4719]: I1124 09:11:57.357666 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55fc6d8c7-9576d" event={"ID":"9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb","Type":"ContainerStarted","Data":"3d904e406aed960f50db325e7c70b784c53569f5503f755ed4ff6b5696c760ba"} Nov 24 09:11:57 crc kubenswrapper[4719]: I1124 09:11:57.371702 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-68fd59f556-bvd2x" podStartSLOduration=7.839080197 podStartE2EDuration="10.371682146s" podCreationTimestamp="2025-11-24 09:11:47 +0000 UTC" firstStartedPulling="2025-11-24 09:11:52.356586699 +0000 UTC m=+1088.687859951" lastFinishedPulling="2025-11-24 09:11:54.889188648 +0000 UTC m=+1091.220461900" observedRunningTime="2025-11-24 09:11:57.369430471 +0000 UTC m=+1093.700703743" watchObservedRunningTime="2025-11-24 09:11:57.371682146 +0000 UTC m=+1093.702955398" Nov 24 09:11:57 crc kubenswrapper[4719]: I1124 09:11:57.395290 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55fc6d8c7-9576d" podStartSLOduration=7.883171925 podStartE2EDuration="10.39527086s" podCreationTimestamp="2025-11-24 09:11:47 +0000 UTC" firstStartedPulling="2025-11-24 09:11:52.377003861 +0000 UTC m=+1088.708277113" lastFinishedPulling="2025-11-24 09:11:54.889102796 +0000 UTC m=+1091.220376048" observedRunningTime="2025-11-24 09:11:57.393724315 +0000 UTC m=+1093.724997567" watchObservedRunningTime="2025-11-24 09:11:57.39527086 +0000 UTC m=+1093.726544142" Nov 24 09:12:01 crc kubenswrapper[4719]: I1124 09:12:01.183852 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:12:01 crc kubenswrapper[4719]: I1124 09:12:01.244724 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" 
containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 09:12:02 crc kubenswrapper[4719]: I1124 09:12:02.499641 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c84b4b586-mwtc8" Nov 24 09:12:02 crc kubenswrapper[4719]: I1124 09:12:02.575061 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-785454dbb-gxnhx"] Nov 24 09:12:02 crc kubenswrapper[4719]: I1124 09:12:02.575297 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api-log" containerID="cri-o://8ae5c21060ce9a1f87692605cc13180ab0b86e40ca5566ec4cca63828a649eb6" gracePeriod=30 Nov 24 09:12:02 crc kubenswrapper[4719]: I1124 09:12:02.575406 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api" containerID="cri-o://75d2e85d957e51b222d1441debf2802a3339afc3c6193317bee9d7864dcdf17e" gracePeriod=30 Nov 24 09:12:02 crc kubenswrapper[4719]: I1124 09:12:02.586335 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": EOF" Nov 24 09:12:02 crc kubenswrapper[4719]: I1124 09:12:02.586615 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": EOF" Nov 24 09:12:03 crc kubenswrapper[4719]: I1124 09:12:03.048234 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:12:03 crc kubenswrapper[4719]: I1124 09:12:03.133126 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"] Nov 24 09:12:03 crc kubenswrapper[4719]: I1124 09:12:03.133794 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" podUID="db6461b8-f751-4248-a4fc-fe1b3b987706" containerName="dnsmasq-dns" containerID="cri-o://69eb6bd56f637c44c472fcb1d0df869698d9bc78869a53d3e67b04cbfa723713" gracePeriod=10 Nov 24 09:12:03 crc kubenswrapper[4719]: I1124 09:12:03.291168 4719 generic.go:334] "Generic (PLEG): container finished" podID="db6461b8-f751-4248-a4fc-fe1b3b987706" containerID="69eb6bd56f637c44c472fcb1d0df869698d9bc78869a53d3e67b04cbfa723713" exitCode=0 Nov 24 09:12:03 crc kubenswrapper[4719]: I1124 09:12:03.291235 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" event={"ID":"db6461b8-f751-4248-a4fc-fe1b3b987706","Type":"ContainerDied","Data":"69eb6bd56f637c44c472fcb1d0df869698d9bc78869a53d3e67b04cbfa723713"} Nov 24 09:12:03 crc kubenswrapper[4719]: I1124 09:12:03.293983 4719 generic.go:334] "Generic (PLEG): container finished" podID="f5744d56-d51c-4529-9753-440276861091" containerID="8ae5c21060ce9a1f87692605cc13180ab0b86e40ca5566ec4cca63828a649eb6" exitCode=143 Nov 24 09:12:03 crc kubenswrapper[4719]: I1124 09:12:03.294016 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785454dbb-gxnhx" 
event={"ID":"f5744d56-d51c-4529-9753-440276861091","Type":"ContainerDied","Data":"8ae5c21060ce9a1f87692605cc13180ab0b86e40ca5566ec4cca63828a649eb6"} Nov 24 09:12:04 crc kubenswrapper[4719]: I1124 09:12:04.314724 4719 generic.go:334] "Generic (PLEG): container finished" podID="902a4567-228a-43e0-b6c4-c323c4366c94" containerID="fbe1391080b3ad2ec9d1385da1c125e20804d2e3a60b3e12d58aa350fc0bd326" exitCode=0 Nov 24 09:12:04 crc kubenswrapper[4719]: I1124 09:12:04.314766 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8bn65" event={"ID":"902a4567-228a-43e0-b6c4-c323c4366c94","Type":"ContainerDied","Data":"fbe1391080b3ad2ec9d1385da1c125e20804d2e3a60b3e12d58aa350fc0bd326"} Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.506728 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.616455 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-dns-svc\") pod \"db6461b8-f751-4248-a4fc-fe1b3b987706\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.616862 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-nb\") pod \"db6461b8-f751-4248-a4fc-fe1b3b987706\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.616925 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-config\") pod \"db6461b8-f751-4248-a4fc-fe1b3b987706\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.617116 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9lk4\" (UniqueName: \"kubernetes.io/projected/db6461b8-f751-4248-a4fc-fe1b3b987706-kube-api-access-g9lk4\") pod \"db6461b8-f751-4248-a4fc-fe1b3b987706\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.617171 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-sb\") pod \"db6461b8-f751-4248-a4fc-fe1b3b987706\" (UID: \"db6461b8-f751-4248-a4fc-fe1b3b987706\") " Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.635418 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db6461b8-f751-4248-a4fc-fe1b3b987706-kube-api-access-g9lk4" (OuterVolumeSpecName: "kube-api-access-g9lk4") pod "db6461b8-f751-4248-a4fc-fe1b3b987706" (UID: "db6461b8-f751-4248-a4fc-fe1b3b987706"). InnerVolumeSpecName "kube-api-access-g9lk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.676930 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db6461b8-f751-4248-a4fc-fe1b3b987706" (UID: "db6461b8-f751-4248-a4fc-fe1b3b987706"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.686514 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-config" (OuterVolumeSpecName: "config") pod "db6461b8-f751-4248-a4fc-fe1b3b987706" (UID: "db6461b8-f751-4248-a4fc-fe1b3b987706"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.719521 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9lk4\" (UniqueName: \"kubernetes.io/projected/db6461b8-f751-4248-a4fc-fe1b3b987706-kube-api-access-g9lk4\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.719749 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.719818 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.729253 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db6461b8-f751-4248-a4fc-fe1b3b987706" (UID: "db6461b8-f751-4248-a4fc-fe1b3b987706"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.738304 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db6461b8-f751-4248-a4fc-fe1b3b987706" (UID: "db6461b8-f751-4248-a4fc-fe1b3b987706"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.821141 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:06 crc kubenswrapper[4719]: I1124 09:12:06.821168 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db6461b8-f751-4248-a4fc-fe1b3b987706-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.170726 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8bn65" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.227870 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/902a4567-228a-43e0-b6c4-c323c4366c94-etc-machine-id\") pod \"902a4567-228a-43e0-b6c4-c323c4366c94\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.227955 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s6sb\" (UniqueName: \"kubernetes.io/projected/902a4567-228a-43e0-b6c4-c323c4366c94-kube-api-access-2s6sb\") pod \"902a4567-228a-43e0-b6c4-c323c4366c94\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.228058 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-config-data\") pod \"902a4567-228a-43e0-b6c4-c323c4366c94\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.228119 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-combined-ca-bundle\") pod \"902a4567-228a-43e0-b6c4-c323c4366c94\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.228163 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-db-sync-config-data\") pod \"902a4567-228a-43e0-b6c4-c323c4366c94\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.228184 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-scripts\") pod \"902a4567-228a-43e0-b6c4-c323c4366c94\" (UID: \"902a4567-228a-43e0-b6c4-c323c4366c94\") " Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.232531 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/902a4567-228a-43e0-b6c4-c323c4366c94-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "902a4567-228a-43e0-b6c4-c323c4366c94" (UID: "902a4567-228a-43e0-b6c4-c323c4366c94"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.235307 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-scripts" (OuterVolumeSpecName: "scripts") pod "902a4567-228a-43e0-b6c4-c323c4366c94" (UID: "902a4567-228a-43e0-b6c4-c323c4366c94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.235356 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "902a4567-228a-43e0-b6c4-c323c4366c94" (UID: "902a4567-228a-43e0-b6c4-c323c4366c94"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.239309 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902a4567-228a-43e0-b6c4-c323c4366c94-kube-api-access-2s6sb" (OuterVolumeSpecName: "kube-api-access-2s6sb") pod "902a4567-228a-43e0-b6c4-c323c4366c94" (UID: "902a4567-228a-43e0-b6c4-c323c4366c94"). InnerVolumeSpecName "kube-api-access-2s6sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.265125 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "902a4567-228a-43e0-b6c4-c323c4366c94" (UID: "902a4567-228a-43e0-b6c4-c323c4366c94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.323265 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-config-data" (OuterVolumeSpecName: "config-data") pod "902a4567-228a-43e0-b6c4-c323c4366c94" (UID: "902a4567-228a-43e0-b6c4-c323c4366c94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.334511 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.334543 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.334557 4719 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.334582 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/902a4567-228a-43e0-b6c4-c323c4366c94-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.334597 4719 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/902a4567-228a-43e0-b6c4-c323c4366c94-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.334621 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s6sb\" (UniqueName: \"kubernetes.io/projected/902a4567-228a-43e0-b6c4-c323c4366c94-kube-api-access-2s6sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.345284 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" event={"ID":"db6461b8-f751-4248-a4fc-fe1b3b987706","Type":"ContainerDied","Data":"929f7d53033cdf45a8f0dbaa9d7128edf3832ed01a44805557c918c95f4d54ba"} Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.345344 4719 scope.go:117] "RemoveContainer" containerID="69eb6bd56f637c44c472fcb1d0df869698d9bc78869a53d3e67b04cbfa723713" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.345842 
4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.352926 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8bn65" event={"ID":"902a4567-228a-43e0-b6c4-c323c4366c94","Type":"ContainerDied","Data":"b521947da76ad5af6d94183a514103fd7676f5dab5e26d62fd82aa58fce16584"} Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.352965 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b521947da76ad5af6d94183a514103fd7676f5dab5e26d62fd82aa58fce16584" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.353306 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8bn65" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.366549 4719 scope.go:117] "RemoveContainer" containerID="f98f13014f042ea36032b4229c2d22cb6aed65e7031aa956729e88394a2dd9d0" Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.403813 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"] Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.412080 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-6pxsp"] Nov 24 09:12:07 crc kubenswrapper[4719]: I1124 09:12:07.628258 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.131380 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.363439 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerStarted","Data":"dfdd90d33537b224b3b15bf925b12d03e661079571ce2c443c1b268e3c70b355"} Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.363627 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="ceilometer-central-agent" containerID="cri-o://7f11e627c6a4276a05cd5af15840dc44bbaa607f65ce24c5e48be532c044e5f4" gracePeriod=30 Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.363958 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.364218 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="proxy-httpd" containerID="cri-o://dfdd90d33537b224b3b15bf925b12d03e661079571ce2c443c1b268e3c70b355" gracePeriod=30 Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.364271 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="sg-core" 
containerID="cri-o://79816cd0c7fa1fcdc7a5b4bf7b446db2ffee5775627a009335c7adf3f0dc6fd1" gracePeriod=30 Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.364303 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="ceilometer-notification-agent" containerID="cri-o://219c4c45c6d0f7689efd72214412a91e927b78c719d636cf8ee0ddc068a3b715" gracePeriod=30 Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.394856 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.564458202 podStartE2EDuration="51.394834738s" podCreationTimestamp="2025-11-24 09:11:17 +0000 UTC" firstStartedPulling="2025-11-24 09:11:18.445110128 +0000 UTC m=+1054.776383380" lastFinishedPulling="2025-11-24 09:12:07.275486664 +0000 UTC m=+1103.606759916" observedRunningTime="2025-11-24 09:12:08.385496977 +0000 UTC m=+1104.716770259" watchObservedRunningTime="2025-11-24 09:12:08.394834738 +0000 UTC m=+1104.726107990" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.533314 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db6461b8-f751-4248-a4fc-fe1b3b987706" path="/var/lib/kubelet/pods/db6461b8-f751-4248-a4fc-fe1b3b987706/volumes" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.629394 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:08 crc kubenswrapper[4719]: E1124 09:12:08.629735 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6461b8-f751-4248-a4fc-fe1b3b987706" containerName="init" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.629746 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6461b8-f751-4248-a4fc-fe1b3b987706" containerName="init" Nov 24 09:12:08 crc kubenswrapper[4719]: E1124 09:12:08.629763 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db6461b8-f751-4248-a4fc-fe1b3b987706" containerName="dnsmasq-dns" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.629768 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="db6461b8-f751-4248-a4fc-fe1b3b987706" containerName="dnsmasq-dns" Nov 24 09:12:08 crc kubenswrapper[4719]: E1124 09:12:08.629777 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" containerName="init" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.629783 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" containerName="init" Nov 24 09:12:08 crc kubenswrapper[4719]: E1124 09:12:08.629800 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" containerName="dnsmasq-dns" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.629807 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" containerName="dnsmasq-dns" Nov 24 09:12:08 crc kubenswrapper[4719]: E1124 09:12:08.629821 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902a4567-228a-43e0-b6c4-c323c4366c94" containerName="cinder-db-sync" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.629828 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="902a4567-228a-43e0-b6c4-c323c4366c94" containerName="cinder-db-sync" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.629986 4719 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="db6461b8-f751-4248-a4fc-fe1b3b987706" containerName="dnsmasq-dns" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.630002 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="902a4567-228a-43e0-b6c4-c323c4366c94" containerName="cinder-db-sync" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.630011 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee1a0f0-a05b-42b8-aa93-af2c12c699b5" containerName="dnsmasq-dns" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.630797 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.644820 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.644887 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.645693 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-h75nh" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.646091 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.661881 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.662002 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.662024 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.662096 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e45bd01-4f7a-4d72-ab75-0358fb140a17-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.662135 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-scripts\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.662168 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjsg\" (UniqueName: \"kubernetes.io/projected/0e45bd01-4f7a-4d72-ab75-0358fb140a17-kube-api-access-fvjsg\") pod 
\"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.663158 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-zv6tm"] Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.664687 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.673109 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.713510 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-zv6tm"] Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.767622 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.767872 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.767963 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-config\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.768096 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.768170 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7tl\" (UniqueName: \"kubernetes.io/projected/9df5868d-b22b-4226-831b-cf19140e059c-kube-api-access-7t7tl\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.768241 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.768338 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e45bd01-4f7a-4d72-ab75-0358fb140a17-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.768468 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-scripts\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.768561 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.768640 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvjsg\" (UniqueName: \"kubernetes.io/projected/0e45bd01-4f7a-4d72-ab75-0358fb140a17-kube-api-access-fvjsg\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.768715 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-dns-svc\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.769142 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e45bd01-4f7a-4d72-ab75-0358fb140a17-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.777440 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.777949 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.783537 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.784636 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-scripts\") pod \"cinder-scheduler-0\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.788651 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvjsg\" (UniqueName: \"kubernetes.io/projected/0e45bd01-4f7a-4d72-ab75-0358fb140a17-kube-api-access-fvjsg\") pod \"cinder-scheduler-0\" (UID: 
\"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.849665 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.851214 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.857460 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870232 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcqj\" (UniqueName: \"kubernetes.io/projected/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-kube-api-access-hzcqj\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870293 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7tl\" (UniqueName: \"kubernetes.io/projected/9df5868d-b22b-4226-831b-cf19140e059c-kube-api-access-7t7tl\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870320 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-scripts\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870348 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870414 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870517 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870561 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870695 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-dns-svc\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: 
\"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870765 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870806 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870876 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-config\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.870963 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-logs\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.871426 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.872044 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-dns-svc\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.872274 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-config\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.872728 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.900882 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.920913 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t7tl\" (UniqueName: \"kubernetes.io/projected/9df5868d-b22b-4226-831b-cf19140e059c-kube-api-access-7t7tl\") pod \"dnsmasq-dns-58db5546cc-zv6tm\" (UID: 
\"9df5868d-b22b-4226-831b-cf19140e059c\") " pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.949379 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.972310 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.972366 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.972446 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-logs\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.972483 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzcqj\" (UniqueName: \"kubernetes.io/projected/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-kube-api-access-hzcqj\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.972511 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-scripts\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.972538 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.972600 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.973269 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-logs\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.972532 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.976810 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data-custom\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.977167 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.977976 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.978541 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-scripts\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.987996 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:08 crc kubenswrapper[4719]: I1124 09:12:08.999674 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzcqj\" (UniqueName: \"kubernetes.io/projected/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-kube-api-access-hzcqj\") pod \"cinder-api-0\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " pod="openstack/cinder-api-0" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.057270 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": read tcp 10.217.0.2:46672->10.217.0.146:9311: read: connection reset by peer" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.057277 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": read tcp 10.217.0.2:46668->10.217.0.146:9311: read: connection reset by peer" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.057999 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-785454dbb-gxnhx" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": dial tcp 10.217.0.146:9311: connect: connection refused" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.173232 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.384234 4719 generic.go:334] "Generic (PLEG): container finished" podID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerID="dfdd90d33537b224b3b15bf925b12d03e661079571ce2c443c1b268e3c70b355" exitCode=0 Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.384272 4719 generic.go:334] "Generic (PLEG): container finished" podID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerID="79816cd0c7fa1fcdc7a5b4bf7b446db2ffee5775627a009335c7adf3f0dc6fd1" exitCode=2 Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.384283 4719 generic.go:334] "Generic (PLEG): container finished" podID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerID="7f11e627c6a4276a05cd5af15840dc44bbaa607f65ce24c5e48be532c044e5f4" exitCode=0 Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.384311 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerDied","Data":"dfdd90d33537b224b3b15bf925b12d03e661079571ce2c443c1b268e3c70b355"} Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.384355 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerDied","Data":"79816cd0c7fa1fcdc7a5b4bf7b446db2ffee5775627a009335c7adf3f0dc6fd1"} Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.384370 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerDied","Data":"7f11e627c6a4276a05cd5af15840dc44bbaa607f65ce24c5e48be532c044e5f4"} Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.386188 4719 generic.go:334] "Generic (PLEG): container finished" podID="f5744d56-d51c-4529-9753-440276861091" containerID="75d2e85d957e51b222d1441debf2802a3339afc3c6193317bee9d7864dcdf17e" exitCode=0 Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.386218 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785454dbb-gxnhx" event={"ID":"f5744d56-d51c-4529-9753-440276861091","Type":"ContainerDied","Data":"75d2e85d957e51b222d1441debf2802a3339afc3c6193317bee9d7864dcdf17e"} Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.441126 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.610526 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.662480 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-zv6tm"] Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.789409 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data\") pod \"f5744d56-d51c-4529-9753-440276861091\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.789474 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-combined-ca-bundle\") pod \"f5744d56-d51c-4529-9753-440276861091\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.789507 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5744d56-d51c-4529-9753-440276861091-logs\") pod \"f5744d56-d51c-4529-9753-440276861091\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.789559 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kslj\" (UniqueName: \"kubernetes.io/projected/f5744d56-d51c-4529-9753-440276861091-kube-api-access-4kslj\") pod \"f5744d56-d51c-4529-9753-440276861091\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.789686 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data-custom\") pod \"f5744d56-d51c-4529-9753-440276861091\" (UID: \"f5744d56-d51c-4529-9753-440276861091\") " Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.790550 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5744d56-d51c-4529-9753-440276861091-logs" (OuterVolumeSpecName: "logs") pod "f5744d56-d51c-4529-9753-440276861091" (UID: "f5744d56-d51c-4529-9753-440276861091"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.796063 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f5744d56-d51c-4529-9753-440276861091" (UID: "f5744d56-d51c-4529-9753-440276861091"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.799458 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5744d56-d51c-4529-9753-440276861091-kube-api-access-4kslj" (OuterVolumeSpecName: "kube-api-access-4kslj") pod "f5744d56-d51c-4529-9753-440276861091" (UID: "f5744d56-d51c-4529-9753-440276861091"). InnerVolumeSpecName "kube-api-access-4kslj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.828318 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5744d56-d51c-4529-9753-440276861091" (UID: "f5744d56-d51c-4529-9753-440276861091"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.856631 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data" (OuterVolumeSpecName: "config-data") pod "f5744d56-d51c-4529-9753-440276861091" (UID: "f5744d56-d51c-4529-9753-440276861091"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.892766 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.892802 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.892831 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5744d56-d51c-4529-9753-440276861091-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.892844 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kslj\" (UniqueName: \"kubernetes.io/projected/f5744d56-d51c-4529-9753-440276861091-kube-api-access-4kslj\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.892856 4719 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5744d56-d51c-4529-9753-440276861091-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:09 crc kubenswrapper[4719]: I1124 09:12:09.921683 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.411474 4719 generic.go:334] "Generic (PLEG): container finished" podID="9df5868d-b22b-4226-831b-cf19140e059c" containerID="f8ff30b58a642f94a8bf6253f175a86f11e7adac55e914fe8172cc17fb7ab59b" exitCode=0 Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.412014 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" event={"ID":"9df5868d-b22b-4226-831b-cf19140e059c","Type":"ContainerDied","Data":"f8ff30b58a642f94a8bf6253f175a86f11e7adac55e914fe8172cc17fb7ab59b"} Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.412074 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" event={"ID":"9df5868d-b22b-4226-831b-cf19140e059c","Type":"ContainerStarted","Data":"f7066d20f40b6e7f2b73efe118db988712cd62c1ddcf6fae9316c4630b8ec9f9"} Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.418119 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"0e45bd01-4f7a-4d72-ab75-0358fb140a17","Type":"ContainerStarted","Data":"3b177a4afb3f065aa20006759aad924bcd8dad5ad951cfc96ae3672d44a1a623"} Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.423749 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-785454dbb-gxnhx" event={"ID":"f5744d56-d51c-4529-9753-440276861091","Type":"ContainerDied","Data":"99f0e4419ab9d3c30ac8f96fd9613ba281a852bb8ea19a09cad7938f1dbbb3b2"} Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.423821 4719 scope.go:117] "RemoveContainer" containerID="75d2e85d957e51b222d1441debf2802a3339afc3c6193317bee9d7864dcdf17e" Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.423992 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-785454dbb-gxnhx" Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.445897 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d","Type":"ContainerStarted","Data":"11e8ea43bca665c42ccf802d9c464af7dd6261a538f9384c3c5dfcf05d653d98"} Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.463884 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-785454dbb-gxnhx"] Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.481791 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-785454dbb-gxnhx"] Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.484770 4719 scope.go:117] "RemoveContainer" containerID="8ae5c21060ce9a1f87692605cc13180ab0b86e40ca5566ec4cca63828a649eb6" Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.539166 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5744d56-d51c-4529-9753-440276861091" path="/var/lib/kubelet/pods/f5744d56-d51c-4529-9753-440276861091/volumes" Nov 24 09:12:10 crc kubenswrapper[4719]: I1124 09:12:10.973528 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:11 crc kubenswrapper[4719]: I1124 09:12:11.465243 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d","Type":"ContainerStarted","Data":"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033"} Nov 24 09:12:11 crc kubenswrapper[4719]: I1124 09:12:11.469864 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" event={"ID":"9df5868d-b22b-4226-831b-cf19140e059c","Type":"ContainerStarted","Data":"9059a7e5933968d7b5409caf693ec3d3e5d789a1ab080816e98c45ea25e0807d"} Nov 24 09:12:11 crc kubenswrapper[4719]: I1124 09:12:11.471666 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:11 crc kubenswrapper[4719]: I1124 09:12:11.477724 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e45bd01-4f7a-4d72-ab75-0358fb140a17","Type":"ContainerStarted","Data":"8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3"} Nov 24 09:12:11 crc kubenswrapper[4719]: I1124 09:12:11.505797 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" podStartSLOduration=3.505774546 podStartE2EDuration="3.505774546s" podCreationTimestamp="2025-11-24 09:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 09:12:11.497891908 +0000 UTC m=+1107.829165180" watchObservedRunningTime="2025-11-24 09:12:11.505774546 +0000 UTC m=+1107.837047798" Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.507429 4719 generic.go:334] "Generic (PLEG): container finished" podID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerID="219c4c45c6d0f7689efd72214412a91e927b78c719d636cf8ee0ddc068a3b715" exitCode=0 Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.507524 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerDied","Data":"219c4c45c6d0f7689efd72214412a91e927b78c719d636cf8ee0ddc068a3b715"} Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.509175 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d","Type":"ContainerStarted","Data":"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b"} Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.509331 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerName="cinder-api-log" containerID="cri-o://e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033" gracePeriod=30 Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.509398 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerName="cinder-api" containerID="cri-o://b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b" gracePeriod=30 Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.509506 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.516651 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e45bd01-4f7a-4d72-ab75-0358fb140a17","Type":"ContainerStarted","Data":"1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb"} Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.532445 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.532419652 podStartE2EDuration="4.532419652s" podCreationTimestamp="2025-11-24 09:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:12:12.526880291 +0000 UTC m=+1108.858153543" watchObservedRunningTime="2025-11-24 09:12:12.532419652 +0000 UTC m=+1108.863692904" Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.570840 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.664010493 podStartE2EDuration="4.570814925s" podCreationTimestamp="2025-11-24 09:12:08 +0000 UTC" firstStartedPulling="2025-11-24 09:12:09.457030985 +0000 UTC m=+1105.788304237" lastFinishedPulling="2025-11-24 09:12:10.363835417 +0000 UTC m=+1106.695108669" observedRunningTime="2025-11-24 09:12:12.555720247 +0000 UTC m=+1108.886993499" watchObservedRunningTime="2025-11-24 09:12:12.570814925 +0000 UTC m=+1108.902088187" Nov 24 09:12:12 crc kubenswrapper[4719]: I1124 09:12:12.894778 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.058323 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-combined-ca-bundle\") pod \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.058408 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-config-data\") pod \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.058442 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-sg-core-conf-yaml\") pod \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.058467 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-log-httpd\") pod \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.058491 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzh6h\" (UniqueName: \"kubernetes.io/projected/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-kube-api-access-nzh6h\") pod \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.058533 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-run-httpd\") pod \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.058569 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-scripts\") pod \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\" (UID: \"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.060389 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" (UID: "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.060672 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" (UID: "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.066655 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-scripts" (OuterVolumeSpecName: "scripts") pod "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" (UID: "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.088735 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-kube-api-access-nzh6h" (OuterVolumeSpecName: "kube-api-access-nzh6h") pod "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" (UID: "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08"). InnerVolumeSpecName "kube-api-access-nzh6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.112409 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" (UID: "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.157774 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" (UID: "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.160561 4719 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.160583 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzh6h\" (UniqueName: \"kubernetes.io/projected/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-kube-api-access-nzh6h\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.160592 4719 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.160601 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.160609 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.160617 4719 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.170190 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.206978 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-config-data" (OuterVolumeSpecName: "config-data") pod "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" (UID: "b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.261671 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-etc-machine-id\") pod \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.261779 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-logs\") pod \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.261803 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" (UID: "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.261827 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-combined-ca-bundle\") pod \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.261913 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data-custom\") pod \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.261970 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzcqj\" (UniqueName: \"kubernetes.io/projected/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-kube-api-access-hzcqj\") pod \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.262020 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data\") pod \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.262159 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-scripts\") pod \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\" (UID: \"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d\") " Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.262313 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-logs" (OuterVolumeSpecName: "logs") pod "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" (UID: "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.262824 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.262849 4719 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.262861 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.265690 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-scripts" (OuterVolumeSpecName: "scripts") pod "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" (UID: "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.265738 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-kube-api-access-hzcqj" (OuterVolumeSpecName: "kube-api-access-hzcqj") pod "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" (UID: "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d"). InnerVolumeSpecName "kube-api-access-hzcqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.266273 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" (UID: "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.291874 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" (UID: "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.308548 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data" (OuterVolumeSpecName: "config-data") pod "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" (UID: "e3f58046-d0d6-4276-9ed7-aa6c44e44f2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.364241 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.364311 4719 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.364339 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzcqj\" (UniqueName: \"kubernetes.io/projected/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-kube-api-access-hzcqj\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.364367 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.364389 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.529760 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.529837 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08","Type":"ContainerDied","Data":"4499f7b6619f9de917b0a571c2bac985339e61f6c48bfc4eef7c2d2b89e496c9"} Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.530131 4719 scope.go:117] "RemoveContainer" containerID="dfdd90d33537b224b3b15bf925b12d03e661079571ce2c443c1b268e3c70b355" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.537568 4719 generic.go:334] "Generic (PLEG): container finished" podID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerID="b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b" exitCode=0 Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.537665 4719 generic.go:334] "Generic (PLEG): container finished" podID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerID="e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033" exitCode=143 Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.538005 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d","Type":"ContainerDied","Data":"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b"} Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.538088 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d","Type":"ContainerDied","Data":"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033"} Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.538109 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3f58046-d0d6-4276-9ed7-aa6c44e44f2d","Type":"ContainerDied","Data":"11e8ea43bca665c42ccf802d9c464af7dd6261a538f9384c3c5dfcf05d653d98"} Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.539281 4719 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.609760 4719 scope.go:117] "RemoveContainer" containerID="79816cd0c7fa1fcdc7a5b4bf7b446db2ffee5775627a009335c7adf3f0dc6fd1" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.616853 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.624239 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.637277 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.640896 4719 scope.go:117] "RemoveContainer" containerID="219c4c45c6d0f7689efd72214412a91e927b78c719d636cf8ee0ddc068a3b715" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.650142 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.652257 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.653384 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.653467 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api" Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.653527 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="ceilometer-notification-agent" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.653574 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="ceilometer-notification-agent" Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.653626 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerName="cinder-api-log" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.653697 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerName="cinder-api-log" Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.653759 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerName="cinder-api" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.653805 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerName="cinder-api" Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.653863 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api-log" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.653910 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api-log" Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.653966 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="proxy-httpd" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.654015 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="proxy-httpd" Nov 24 
09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.654855 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="ceilometer-central-agent" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.654915 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="ceilometer-central-agent" Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.654984 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="sg-core" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655045 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="sg-core" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655384 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerName="cinder-api" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655459 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="sg-core" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655522 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="ceilometer-notification-agent" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655581 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api-log" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655631 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5744d56-d51c-4529-9753-440276861091" containerName="barbican-api" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655684 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="ceilometer-central-agent" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655732 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" containerName="proxy-httpd" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.655783 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" containerName="cinder-api-log" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.656970 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.659256 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.659606 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.659766 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.670013 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.680219 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.683088 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.685436 4719 scope.go:117] "RemoveContainer" containerID="7f11e627c6a4276a05cd5af15840dc44bbaa607f65ce24c5e48be532c044e5f4" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.688284 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.688476 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.704824 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.738452 4719 scope.go:117] "RemoveContainer" containerID="b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.774647 4719 scope.go:117] "RemoveContainer" containerID="e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.776564 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-scripts\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.776761 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee147176-e4d4-4f7c-a73b-aa861bc83f31-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.776937 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gxn2\" (UniqueName: \"kubernetes.io/projected/ee147176-e4d4-4f7c-a73b-aa861bc83f31-kube-api-access-9gxn2\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.777118 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-config-data\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.777289 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-scripts\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.777444 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-config-data\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.777700 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee147176-e4d4-4f7c-a73b-aa861bc83f31-logs\") pod 
\"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.777824 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.777955 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvvv8\" (UniqueName: \"kubernetes.io/projected/37b1cb81-9588-4efb-8f22-a3e089ae4402-kube-api-access-hvvv8\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.778267 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-log-httpd\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.778417 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.778628 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.778920 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-config-data-custom\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.779083 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-run-httpd\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.779261 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.779459 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 
09:12:13.793758 4719 scope.go:117] "RemoveContainer" containerID="b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b" Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.794199 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b\": container with ID starting with b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b not found: ID does not exist" containerID="b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.794310 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b"} err="failed to get container status \"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b\": rpc error: code = NotFound desc = could not find container \"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b\": container with ID starting with b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b not found: ID does not exist" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.794414 4719 scope.go:117] "RemoveContainer" containerID="e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033" Nov 24 09:12:13 crc kubenswrapper[4719]: E1124 09:12:13.794801 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033\": container with ID starting with e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033 not found: ID does not exist" containerID="e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.794892 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033"} err="failed to get container status \"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033\": rpc error: code = NotFound desc = could not find container \"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033\": container with ID starting with e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033 not found: ID does not exist" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.794968 4719 scope.go:117] "RemoveContainer" containerID="b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.795508 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b"} err="failed to get container status \"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b\": rpc error: code = NotFound desc = could not find container \"b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b\": container with ID starting with b8cb90963550d912454b6a20abbbefbe77f1ed6d843089e50b2b461dd3efc86b not found: ID does not exist" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.795603 4719 scope.go:117] "RemoveContainer" containerID="e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.795933 4719 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033"} err="failed to get container status \"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033\": rpc error: code = NotFound desc = could not find container \"e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033\": container with ID starting with e1410241aa1ae62c5b206bc56823b4ab2db745f131037a05d4d6b13d5681e033 not found: ID does not exist" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.826270 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880672 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee147176-e4d4-4f7c-a73b-aa861bc83f31-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880733 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gxn2\" (UniqueName: \"kubernetes.io/projected/ee147176-e4d4-4f7c-a73b-aa861bc83f31-kube-api-access-9gxn2\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880762 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-config-data\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880785 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-scripts\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880818 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-config-data\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880839 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee147176-e4d4-4f7c-a73b-aa861bc83f31-logs\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880862 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880883 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvvv8\" (UniqueName: \"kubernetes.io/projected/37b1cb81-9588-4efb-8f22-a3e089ae4402-kube-api-access-hvvv8\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880941 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-log-httpd\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880965 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.880996 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.881018 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-config-data-custom\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.881050 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-run-httpd\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.881087 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.881130 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.881150 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-scripts\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.882096 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ee147176-e4d4-4f7c-a73b-aa861bc83f31-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.888497 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-scripts\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.888772 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-run-httpd\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.889912 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-config-data\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.891727 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee147176-e4d4-4f7c-a73b-aa861bc83f31-logs\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.893383 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-config-data-custom\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.894482 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.896550 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-log-httpd\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.906351 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvvv8\" (UniqueName: \"kubernetes.io/projected/37b1cb81-9588-4efb-8f22-a3e089ae4402-kube-api-access-hvvv8\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.909406 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.910230 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gxn2\" (UniqueName: \"kubernetes.io/projected/ee147176-e4d4-4f7c-a73b-aa861bc83f31-kube-api-access-9gxn2\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.913011 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.913753 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.918210 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " pod="openstack/ceilometer-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.919184 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-config-data\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.924562 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee147176-e4d4-4f7c-a73b-aa861bc83f31-scripts\") pod \"cinder-api-0\" (UID: \"ee147176-e4d4-4f7c-a73b-aa861bc83f31\") " pod="openstack/cinder-api-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.950115 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 09:12:13 crc kubenswrapper[4719]: I1124 09:12:13.983924 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 09:12:14 crc kubenswrapper[4719]: I1124 09:12:14.012457 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:14 crc kubenswrapper[4719]: I1124 09:12:14.480102 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 09:12:14 crc kubenswrapper[4719]: I1124 09:12:14.552882 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08" path="/var/lib/kubelet/pods/b3f3938c-d8b6-4801-aeaa-c17cdcbf2d08/volumes" Nov 24 09:12:14 crc kubenswrapper[4719]: I1124 09:12:14.565224 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3f58046-d0d6-4276-9ed7-aa6c44e44f2d" path="/var/lib/kubelet/pods/e3f58046-d0d6-4276-9ed7-aa6c44e44f2d/volumes" Nov 24 09:12:14 crc kubenswrapper[4719]: I1124 09:12:14.565908 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ee147176-e4d4-4f7c-a73b-aa861bc83f31","Type":"ContainerStarted","Data":"75b2e2723e23ba73ca77bc35907f53d79f8688f96cdba60ae042664ec41cfb75"} Nov 24 09:12:14 crc kubenswrapper[4719]: I1124 09:12:14.602230 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:14 crc kubenswrapper[4719]: W1124 09:12:14.607814 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37b1cb81_9588_4efb_8f22_a3e089ae4402.slice/crio-3234e15b4bd7defe249ca824f05fc5e1271793ffa720bafdaf948bb92492fd0f WatchSource:0}: Error finding container 3234e15b4bd7defe249ca824f05fc5e1271793ffa720bafdaf948bb92492fd0f: Status 404 returned error can't find the container with id 3234e15b4bd7defe249ca824f05fc5e1271793ffa720bafdaf948bb92492fd0f Nov 24 09:12:15 crc kubenswrapper[4719]: I1124 09:12:15.562766 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerStarted","Data":"f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314"} Nov 24 09:12:15 crc kubenswrapper[4719]: I1124 09:12:15.563058 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerStarted","Data":"3234e15b4bd7defe249ca824f05fc5e1271793ffa720bafdaf948bb92492fd0f"} Nov 24 09:12:15 crc kubenswrapper[4719]: I1124 09:12:15.564800 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ee147176-e4d4-4f7c-a73b-aa861bc83f31","Type":"ContainerStarted","Data":"c1ec2ed0a3e974cc40c384c978a97bd0e70381690354d29ec437a77453fc55e0"} Nov 24 09:12:16 crc kubenswrapper[4719]: I1124 09:12:16.576068 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerStarted","Data":"a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7"} Nov 24 09:12:16 crc kubenswrapper[4719]: I1124 09:12:16.579065 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ee147176-e4d4-4f7c-a73b-aa861bc83f31","Type":"ContainerStarted","Data":"e5fcac965715687222c2c8f9ff8a060f4bce153357c0245211425a711ca35d3d"} Nov 24 09:12:16 crc kubenswrapper[4719]: I1124 09:12:16.580439 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 09:12:16 crc kubenswrapper[4719]: I1124 09:12:16.601497 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.601479929 podStartE2EDuration="3.601479929s" podCreationTimestamp="2025-11-24 09:12:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:12:16.598297317 +0000 UTC m=+1112.929570589" watchObservedRunningTime="2025-11-24 09:12:16.601479929 +0000 UTC m=+1112.932753181" Nov 24 09:12:17 crc kubenswrapper[4719]: I1124 09:12:17.385257 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5478d99856-2md7b" Nov 24 09:12:17 crc kubenswrapper[4719]: I1124 09:12:17.591978 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerStarted","Data":"352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51"} Nov 24 09:12:17 crc kubenswrapper[4719]: I1124 09:12:17.640866 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5478d99856-2md7b" Nov 24 09:12:18 crc kubenswrapper[4719]: I1124 09:12:18.239524 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86d4855669-sjtqj" Nov 24 09:12:18 crc kubenswrapper[4719]: I1124 09:12:18.311494 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d4bdff97d-5nfdc"] Nov 24 09:12:18 crc kubenswrapper[4719]: I1124 09:12:18.312081 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d4bdff97d-5nfdc" podUID="09971473-24eb-4506-8257-8fe16cdc271a" containerName="neutron-httpd" containerID="cri-o://4169d4b379514e806fa639d8c20ae168a9d9730f32cfc29abc68fb61c4d50221" gracePeriod=30 Nov 24 09:12:18 crc kubenswrapper[4719]: I1124 09:12:18.311767 4719 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-6d4bdff97d-5nfdc" podUID="09971473-24eb-4506-8257-8fe16cdc271a" containerName="neutron-api" containerID="cri-o://788c5d4a28562e5d24d9e87968405ccc4f9346c9a29586f3f2d680b77756ac1b" gracePeriod=30 Nov 24 09:12:18 crc kubenswrapper[4719]: I1124 09:12:18.605488 4719 generic.go:334] "Generic (PLEG): container finished" podID="09971473-24eb-4506-8257-8fe16cdc271a" containerID="4169d4b379514e806fa639d8c20ae168a9d9730f32cfc29abc68fb61c4d50221" exitCode=0 Nov 24 09:12:18 crc kubenswrapper[4719]: I1124 09:12:18.606338 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdff97d-5nfdc" event={"ID":"09971473-24eb-4506-8257-8fe16cdc271a","Type":"ContainerDied","Data":"4169d4b379514e806fa639d8c20ae168a9d9730f32cfc29abc68fb61c4d50221"} Nov 24 09:12:18 crc kubenswrapper[4719]: I1124 09:12:18.991219 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.094264 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-lqxms"] Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.094512 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-lqxms" podUID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" containerName="dnsmasq-dns" containerID="cri-o://4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc" gracePeriod=10 Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.289634 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.373743 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.603525 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.620310 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerStarted","Data":"a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c"} Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.621216 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.624220 4719 generic.go:334] "Generic (PLEG): container finished" podID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" containerID="4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc" exitCode=0 Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.624410 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerName="cinder-scheduler" containerID="cri-o://8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3" gracePeriod=30 Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.624756 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-lqxms" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.625099 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-lqxms" event={"ID":"f27e073e-ba9a-47c7-858a-b2a7a28e867f","Type":"ContainerDied","Data":"4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc"} Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.625127 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-lqxms" event={"ID":"f27e073e-ba9a-47c7-858a-b2a7a28e867f","Type":"ContainerDied","Data":"fc525d3e0f169648ce7db61dc7c318d1ae2983caa20438ae45e53e361ca9a631"} Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.625144 4719 scope.go:117] "RemoveContainer" containerID="4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.625224 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerName="probe" containerID="cri-o://1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb" gracePeriod=30 Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.663163 4719 scope.go:117] "RemoveContainer" containerID="1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.670533 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.338002145 podStartE2EDuration="6.670512171s" podCreationTimestamp="2025-11-24 09:12:13 +0000 UTC" firstStartedPulling="2025-11-24 09:12:14.610003409 +0000 UTC m=+1110.941276661" lastFinishedPulling="2025-11-24 09:12:18.942513435 +0000 UTC m=+1115.273786687" observedRunningTime="2025-11-24 09:12:19.666184766 +0000 UTC m=+1115.997458028" watchObservedRunningTime="2025-11-24 09:12:19.670512171 +0000 UTC m=+1116.001785433" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.686013 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsrxb\" (UniqueName: \"kubernetes.io/projected/f27e073e-ba9a-47c7-858a-b2a7a28e867f-kube-api-access-tsrxb\") pod \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.686118 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-dns-svc\") pod \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.686148 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-config\") pod \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.686188 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-sb\") pod \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.686221 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-nb\") pod \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\" (UID: \"f27e073e-ba9a-47c7-858a-b2a7a28e867f\") " Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.710186 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27e073e-ba9a-47c7-858a-b2a7a28e867f-kube-api-access-tsrxb" (OuterVolumeSpecName: "kube-api-access-tsrxb") pod "f27e073e-ba9a-47c7-858a-b2a7a28e867f" (UID: "f27e073e-ba9a-47c7-858a-b2a7a28e867f"). InnerVolumeSpecName "kube-api-access-tsrxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.716786 4719 scope.go:117] "RemoveContainer" containerID="4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc" Nov 24 09:12:19 crc kubenswrapper[4719]: E1124 09:12:19.721489 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc\": container with ID starting with 4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc not found: ID does not exist" containerID="4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.721524 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc"} err="failed to get container status \"4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc\": rpc error: code = NotFound desc = could not find container \"4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc\": container with ID starting with 4fc58988e9a82260a697b8c776a9f81e6f5dfc15e76cc02247147df150927afc not found: ID does not exist" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.721547 4719 scope.go:117] "RemoveContainer" containerID="1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898" Nov 24 09:12:19 crc kubenswrapper[4719]: E1124 09:12:19.723291 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898\": container with ID starting with 1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898 not found: ID does not exist" containerID="1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.723315 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898"} err="failed to get container status \"1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898\": rpc error: code = NotFound desc = could not find container \"1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898\": container with ID starting with 1d9895aef30816b2fbf722962918b10daa69dcf8204e4c1970550daf4c19f898 not found: ID does not exist" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.761711 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f27e073e-ba9a-47c7-858a-b2a7a28e867f" (UID: "f27e073e-ba9a-47c7-858a-b2a7a28e867f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.780305 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-config" (OuterVolumeSpecName: "config") pod "f27e073e-ba9a-47c7-858a-b2a7a28e867f" (UID: "f27e073e-ba9a-47c7-858a-b2a7a28e867f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.788386 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.788432 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsrxb\" (UniqueName: \"kubernetes.io/projected/f27e073e-ba9a-47c7-858a-b2a7a28e867f-kube-api-access-tsrxb\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.788444 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.802714 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f27e073e-ba9a-47c7-858a-b2a7a28e867f" (UID: "f27e073e-ba9a-47c7-858a-b2a7a28e867f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.803482 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f27e073e-ba9a-47c7-858a-b2a7a28e867f" (UID: "f27e073e-ba9a-47c7-858a-b2a7a28e867f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.890491 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.890790 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f27e073e-ba9a-47c7-858a-b2a7a28e867f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.960540 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-lqxms"] Nov 24 09:12:19 crc kubenswrapper[4719]: I1124 09:12:19.968957 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-lqxms"] Nov 24 09:12:20 crc kubenswrapper[4719]: I1124 09:12:20.533351 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" path="/var/lib/kubelet/pods/f27e073e-ba9a-47c7-858a-b2a7a28e867f/volumes" Nov 24 09:12:20 crc kubenswrapper[4719]: I1124 09:12:20.635852 4719 generic.go:334] "Generic (PLEG): container finished" podID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerID="1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb" exitCode=0 Nov 24 09:12:20 crc kubenswrapper[4719]: I1124 09:12:20.636783 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e45bd01-4f7a-4d72-ab75-0358fb140a17","Type":"ContainerDied","Data":"1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb"} Nov 24 09:12:21 crc kubenswrapper[4719]: I1124 09:12:21.590335 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c66bd98b8-qwf7d" Nov 24 09:12:21 crc kubenswrapper[4719]: I1124 09:12:21.656414 4719 generic.go:334] "Generic (PLEG): container finished" podID="09971473-24eb-4506-8257-8fe16cdc271a" containerID="788c5d4a28562e5d24d9e87968405ccc4f9346c9a29586f3f2d680b77756ac1b" exitCode=0 Nov 24 09:12:21 crc kubenswrapper[4719]: I1124 09:12:21.657239 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdff97d-5nfdc" event={"ID":"09971473-24eb-4506-8257-8fe16cdc271a","Type":"ContainerDied","Data":"788c5d4a28562e5d24d9e87968405ccc4f9346c9a29586f3f2d680b77756ac1b"} Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.093371 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.128407 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-combined-ca-bundle\") pod \"09971473-24eb-4506-8257-8fe16cdc271a\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.128458 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-ovndb-tls-certs\") pod \"09971473-24eb-4506-8257-8fe16cdc271a\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.128586 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwgcv\" (UniqueName: \"kubernetes.io/projected/09971473-24eb-4506-8257-8fe16cdc271a-kube-api-access-gwgcv\") pod \"09971473-24eb-4506-8257-8fe16cdc271a\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.128732 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-config\") pod \"09971473-24eb-4506-8257-8fe16cdc271a\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.128773 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-httpd-config\") pod \"09971473-24eb-4506-8257-8fe16cdc271a\" (UID: \"09971473-24eb-4506-8257-8fe16cdc271a\") " Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.142746 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09971473-24eb-4506-8257-8fe16cdc271a-kube-api-access-gwgcv" (OuterVolumeSpecName: "kube-api-access-gwgcv") pod "09971473-24eb-4506-8257-8fe16cdc271a" (UID: "09971473-24eb-4506-8257-8fe16cdc271a"). InnerVolumeSpecName "kube-api-access-gwgcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.154219 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "09971473-24eb-4506-8257-8fe16cdc271a" (UID: "09971473-24eb-4506-8257-8fe16cdc271a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.223100 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09971473-24eb-4506-8257-8fe16cdc271a" (UID: "09971473-24eb-4506-8257-8fe16cdc271a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.232183 4719 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.232234 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.232247 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwgcv\" (UniqueName: \"kubernetes.io/projected/09971473-24eb-4506-8257-8fe16cdc271a-kube-api-access-gwgcv\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.234736 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-config" (OuterVolumeSpecName: "config") pod "09971473-24eb-4506-8257-8fe16cdc271a" (UID: "09971473-24eb-4506-8257-8fe16cdc271a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.280302 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "09971473-24eb-4506-8257-8fe16cdc271a" (UID: "09971473-24eb-4506-8257-8fe16cdc271a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.334373 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.334415 4719 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/09971473-24eb-4506-8257-8fe16cdc271a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.666120 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d4bdff97d-5nfdc" event={"ID":"09971473-24eb-4506-8257-8fe16cdc271a","Type":"ContainerDied","Data":"a5ae7fece151b5a0d7ff015ea3c41ec4bffaac73b10727dd6f68bb2a6c80404e"} Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.666414 4719 scope.go:117] "RemoveContainer" containerID="4169d4b379514e806fa639d8c20ae168a9d9730f32cfc29abc68fb61c4d50221" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.666190 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d4bdff97d-5nfdc" Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.693044 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d4bdff97d-5nfdc"] Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.702243 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d4bdff97d-5nfdc"] Nov 24 09:12:22 crc kubenswrapper[4719]: I1124 09:12:22.702751 4719 scope.go:117] "RemoveContainer" containerID="788c5d4a28562e5d24d9e87968405ccc4f9346c9a29586f3f2d680b77756ac1b" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.366813 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.468993 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-scripts\") pod \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.469072 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvjsg\" (UniqueName: \"kubernetes.io/projected/0e45bd01-4f7a-4d72-ab75-0358fb140a17-kube-api-access-fvjsg\") pod \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.469096 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data-custom\") pod \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.469233 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-combined-ca-bundle\") pod \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.469311 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e45bd01-4f7a-4d72-ab75-0358fb140a17-etc-machine-id\") pod \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.469397 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data\") pod \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\" (UID: \"0e45bd01-4f7a-4d72-ab75-0358fb140a17\") " Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.469550 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e45bd01-4f7a-4d72-ab75-0358fb140a17-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0e45bd01-4f7a-4d72-ab75-0358fb140a17" (UID: "0e45bd01-4f7a-4d72-ab75-0358fb140a17"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.469942 4719 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e45bd01-4f7a-4d72-ab75-0358fb140a17-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.478199 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e45bd01-4f7a-4d72-ab75-0358fb140a17" (UID: "0e45bd01-4f7a-4d72-ab75-0358fb140a17"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.484765 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e45bd01-4f7a-4d72-ab75-0358fb140a17-kube-api-access-fvjsg" (OuterVolumeSpecName: "kube-api-access-fvjsg") pod "0e45bd01-4f7a-4d72-ab75-0358fb140a17" (UID: "0e45bd01-4f7a-4d72-ab75-0358fb140a17"). InnerVolumeSpecName "kube-api-access-fvjsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.498200 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-scripts" (OuterVolumeSpecName: "scripts") pod "0e45bd01-4f7a-4d72-ab75-0358fb140a17" (UID: "0e45bd01-4f7a-4d72-ab75-0358fb140a17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.533759 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09971473-24eb-4506-8257-8fe16cdc271a" path="/var/lib/kubelet/pods/09971473-24eb-4506-8257-8fe16cdc271a/volumes" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.571857 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.571882 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvjsg\" (UniqueName: \"kubernetes.io/projected/0e45bd01-4f7a-4d72-ab75-0358fb140a17-kube-api-access-fvjsg\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.571892 4719 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.573684 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e45bd01-4f7a-4d72-ab75-0358fb140a17" (UID: "0e45bd01-4f7a-4d72-ab75-0358fb140a17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.619883 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data" (OuterVolumeSpecName: "config-data") pod "0e45bd01-4f7a-4d72-ab75-0358fb140a17" (UID: "0e45bd01-4f7a-4d72-ab75-0358fb140a17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.673396 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.673613 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e45bd01-4f7a-4d72-ab75-0358fb140a17-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.688378 4719 generic.go:334] "Generic (PLEG): container finished" podID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerID="8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3" exitCode=0 Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.688419 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e45bd01-4f7a-4d72-ab75-0358fb140a17","Type":"ContainerDied","Data":"8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3"} Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.688446 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0e45bd01-4f7a-4d72-ab75-0358fb140a17","Type":"ContainerDied","Data":"3b177a4afb3f065aa20006759aad924bcd8dad5ad951cfc96ae3672d44a1a623"} Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.688484 4719 scope.go:117] "RemoveContainer" containerID="1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.688496 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.725496 4719 scope.go:117] "RemoveContainer" containerID="8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.755804 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.777218 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.779284 4719 scope.go:117] "RemoveContainer" containerID="1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb" Nov 24 09:12:24 crc kubenswrapper[4719]: E1124 09:12:24.785421 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb\": container with ID starting with 1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb not found: ID does not exist" containerID="1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.785473 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb"} err="failed to get container status \"1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb\": rpc error: code = NotFound desc = could not find container \"1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb\": container with ID starting with 1e0b81913b9729412fad464bb1146bff5786bdd0afe5dd1bfe9d4eec501bfceb not found: ID does not exist" Nov 24 09:12:24 crc 
kubenswrapper[4719]: I1124 09:12:24.785502 4719 scope.go:117] "RemoveContainer" containerID="8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3" Nov 24 09:12:24 crc kubenswrapper[4719]: E1124 09:12:24.785764 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3\": container with ID starting with 8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3 not found: ID does not exist" containerID="8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.785796 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3"} err="failed to get container status \"8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3\": rpc error: code = NotFound desc = could not find container \"8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3\": container with ID starting with 8f5c4bdb9a44e663f987d4edcdcc984c4684d295f9b91e84267cd5c6e714c8a3 not found: ID does not exist" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.802426 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:24 crc kubenswrapper[4719]: E1124 09:12:24.803236 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" containerName="init" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803253 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" containerName="init" Nov 24 09:12:24 crc kubenswrapper[4719]: E1124 09:12:24.803268 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09971473-24eb-4506-8257-8fe16cdc271a" containerName="neutron-api" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803274 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="09971473-24eb-4506-8257-8fe16cdc271a" containerName="neutron-api" Nov 24 09:12:24 crc kubenswrapper[4719]: E1124 09:12:24.803298 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerName="probe" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803305 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerName="probe" Nov 24 09:12:24 crc kubenswrapper[4719]: E1124 09:12:24.803322 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" containerName="dnsmasq-dns" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803332 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" containerName="dnsmasq-dns" Nov 24 09:12:24 crc kubenswrapper[4719]: E1124 09:12:24.803353 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09971473-24eb-4506-8257-8fe16cdc271a" containerName="neutron-httpd" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803360 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="09971473-24eb-4506-8257-8fe16cdc271a" containerName="neutron-httpd" Nov 24 09:12:24 crc kubenswrapper[4719]: E1124 09:12:24.803397 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerName="cinder-scheduler" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 
09:12:24.803406 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerName="cinder-scheduler" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803761 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerName="cinder-scheduler" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803802 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="09971473-24eb-4506-8257-8fe16cdc271a" containerName="neutron-httpd" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803810 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="09971473-24eb-4506-8257-8fe16cdc271a" containerName="neutron-api" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803827 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27e073e-ba9a-47c7-858a-b2a7a28e867f" containerName="dnsmasq-dns" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.803843 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" containerName="probe" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.806957 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.825592 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.857229 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.979497 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-scripts\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.979560 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-config-data\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.979642 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.979706 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44ceda2d-a4e3-4606-be8b-fa3806e4be38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.979737 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4qbr\" (UniqueName: \"kubernetes.io/projected/44ceda2d-a4e3-4606-be8b-fa3806e4be38-kube-api-access-p4qbr\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " 
pod="openstack/cinder-scheduler-0" Nov 24 09:12:24 crc kubenswrapper[4719]: I1124 09:12:24.979755 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.081757 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-scripts\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.081824 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-config-data\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.081859 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.081920 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44ceda2d-a4e3-4606-be8b-fa3806e4be38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.081965 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4qbr\" (UniqueName: \"kubernetes.io/projected/44ceda2d-a4e3-4606-be8b-fa3806e4be38-kube-api-access-p4qbr\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.081984 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.082574 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/44ceda2d-a4e3-4606-be8b-fa3806e4be38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.088224 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.088245 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.089721 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-config-data\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.092522 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44ceda2d-a4e3-4606-be8b-fa3806e4be38-scripts\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.103912 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4qbr\" (UniqueName: \"kubernetes.io/projected/44ceda2d-a4e3-4606-be8b-fa3806e4be38-kube-api-access-p4qbr\") pod \"cinder-scheduler-0\" (UID: \"44ceda2d-a4e3-4606-be8b-fa3806e4be38\") " pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.169821 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 09:12:25 crc kubenswrapper[4719]: I1124 09:12:25.744837 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.325571 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.326822 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.339688 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9992k" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.339902 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.340502 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.347982 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.405562 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf72c\" (UniqueName: \"kubernetes.io/projected/38d62700-956d-4aa3-a239-ff6fb8068ded-kube-api-access-pf72c\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.405663 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38d62700-956d-4aa3-a239-ff6fb8068ded-openstack-config-secret\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.405690 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38d62700-956d-4aa3-a239-ff6fb8068ded-openstack-config\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.405712 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38d62700-956d-4aa3-a239-ff6fb8068ded-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.507545 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf72c\" (UniqueName: \"kubernetes.io/projected/38d62700-956d-4aa3-a239-ff6fb8068ded-kube-api-access-pf72c\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.507641 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38d62700-956d-4aa3-a239-ff6fb8068ded-openstack-config-secret\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.507662 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38d62700-956d-4aa3-a239-ff6fb8068ded-openstack-config\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.507679 4719 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38d62700-956d-4aa3-a239-ff6fb8068ded-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.509231 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38d62700-956d-4aa3-a239-ff6fb8068ded-openstack-config\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.519340 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38d62700-956d-4aa3-a239-ff6fb8068ded-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.531513 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38d62700-956d-4aa3-a239-ff6fb8068ded-openstack-config-secret\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.533703 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf72c\" (UniqueName: \"kubernetes.io/projected/38d62700-956d-4aa3-a239-ff6fb8068ded-kube-api-access-pf72c\") pod \"openstackclient\" (UID: \"38d62700-956d-4aa3-a239-ff6fb8068ded\") " pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.543191 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e45bd01-4f7a-4d72-ab75-0358fb140a17" path="/var/lib/kubelet/pods/0e45bd01-4f7a-4d72-ab75-0358fb140a17/volumes" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.658572 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.763266 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"44ceda2d-a4e3-4606-be8b-fa3806e4be38","Type":"ContainerStarted","Data":"fc09d0d4ecd9a3ac2362e28b279533e5fe69adec8ab784774eb3ab4549a462d5"} Nov 24 09:12:26 crc kubenswrapper[4719]: I1124 09:12:26.764201 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"44ceda2d-a4e3-4606-be8b-fa3806e4be38","Type":"ContainerStarted","Data":"c4433ecda2795cbf3ee410bd031f2968a1b8be190a200e8b7e0fe9462639c53c"} Nov 24 09:12:27 crc kubenswrapper[4719]: I1124 09:12:27.379326 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 09:12:27 crc kubenswrapper[4719]: I1124 09:12:27.666235 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 09:12:27 crc kubenswrapper[4719]: I1124 09:12:27.781964 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"38d62700-956d-4aa3-a239-ff6fb8068ded","Type":"ContainerStarted","Data":"415b3cfb5892dd7f3561c899aad3e2fc18fc3d3a53a08d9fe9f240b2afa0edaa"} Nov 24 09:12:27 crc kubenswrapper[4719]: I1124 09:12:27.785070 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"44ceda2d-a4e3-4606-be8b-fa3806e4be38","Type":"ContainerStarted","Data":"b9458006d8fc141e50299b8c40a44c489869e624ae74c93c03560ccc74f4a93b"} Nov 24 09:12:27 crc kubenswrapper[4719]: I1124 09:12:27.812076 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.812049965 podStartE2EDuration="3.812049965s" podCreationTimestamp="2025-11-24 09:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:12:27.809286905 +0000 UTC m=+1124.140560187" watchObservedRunningTime="2025-11-24 09:12:27.812049965 +0000 UTC m=+1124.143323217" Nov 24 09:12:30 crc kubenswrapper[4719]: I1124 09:12:30.169983 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 09:12:34 crc kubenswrapper[4719]: I1124 09:12:34.561858 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:12:34 crc kubenswrapper[4719]: I1124 09:12:34.562546 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:12:35 crc kubenswrapper[4719]: I1124 09:12:35.400980 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 09:12:36 crc kubenswrapper[4719]: I1124 09:12:36.834765 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:36 crc kubenswrapper[4719]: I1124 09:12:36.835119 4719 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="ceilometer-central-agent" containerID="cri-o://f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314" gracePeriod=30 Nov 24 09:12:36 crc kubenswrapper[4719]: I1124 09:12:36.835227 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="ceilometer-notification-agent" containerID="cri-o://a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7" gracePeriod=30 Nov 24 09:12:36 crc kubenswrapper[4719]: I1124 09:12:36.835214 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="proxy-httpd" containerID="cri-o://a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c" gracePeriod=30 Nov 24 09:12:36 crc kubenswrapper[4719]: I1124 09:12:36.835211 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="sg-core" containerID="cri-o://352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51" gracePeriod=30 Nov 24 09:12:36 crc kubenswrapper[4719]: I1124 09:12:36.855011 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.154:3000/\": EOF" Nov 24 09:12:37 crc kubenswrapper[4719]: I1124 09:12:37.885956 4719 generic.go:334] "Generic (PLEG): container finished" podID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerID="a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c" exitCode=0 Nov 24 09:12:37 crc kubenswrapper[4719]: I1124 09:12:37.886246 4719 generic.go:334] "Generic (PLEG): container finished" podID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerID="352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51" exitCode=2 Nov 24 09:12:37 crc kubenswrapper[4719]: I1124 09:12:37.886028 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerDied","Data":"a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c"} Nov 24 09:12:37 crc kubenswrapper[4719]: I1124 09:12:37.886283 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerDied","Data":"352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51"} Nov 24 09:12:38 crc kubenswrapper[4719]: I1124 09:12:38.895975 4719 generic.go:334] "Generic (PLEG): container finished" podID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerID="f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314" exitCode=0 Nov 24 09:12:38 crc kubenswrapper[4719]: I1124 09:12:38.896122 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerDied","Data":"f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314"} Nov 24 09:12:39 crc kubenswrapper[4719]: I1124 09:12:39.907370 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"38d62700-956d-4aa3-a239-ff6fb8068ded","Type":"ContainerStarted","Data":"73d94713f0c44650ae76efbf71861f688d5a3a717613358cbd349da99874a7cd"} Nov 24 09:12:39 crc kubenswrapper[4719]: 
I1124 09:12:39.934721 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.6520580000000002 podStartE2EDuration="13.934698706s" podCreationTimestamp="2025-11-24 09:12:26 +0000 UTC" firstStartedPulling="2025-11-24 09:12:27.407630189 +0000 UTC m=+1123.738903451" lastFinishedPulling="2025-11-24 09:12:38.690270905 +0000 UTC m=+1135.021544157" observedRunningTime="2025-11-24 09:12:39.926523489 +0000 UTC m=+1136.257796741" watchObservedRunningTime="2025-11-24 09:12:39.934698706 +0000 UTC m=+1136.265971958" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.445759 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.617563 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-config-data\") pod \"37b1cb81-9588-4efb-8f22-a3e089ae4402\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.617643 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-run-httpd\") pod \"37b1cb81-9588-4efb-8f22-a3e089ae4402\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.617660 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-log-httpd\") pod \"37b1cb81-9588-4efb-8f22-a3e089ae4402\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.617694 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-combined-ca-bundle\") pod \"37b1cb81-9588-4efb-8f22-a3e089ae4402\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.617765 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvvv8\" (UniqueName: \"kubernetes.io/projected/37b1cb81-9588-4efb-8f22-a3e089ae4402-kube-api-access-hvvv8\") pod \"37b1cb81-9588-4efb-8f22-a3e089ae4402\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.617823 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-sg-core-conf-yaml\") pod \"37b1cb81-9588-4efb-8f22-a3e089ae4402\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.617882 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-scripts\") pod \"37b1cb81-9588-4efb-8f22-a3e089ae4402\" (UID: \"37b1cb81-9588-4efb-8f22-a3e089ae4402\") " Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.618077 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "37b1cb81-9588-4efb-8f22-a3e089ae4402" (UID: "37b1cb81-9588-4efb-8f22-a3e089ae4402"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.618382 4719 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.618506 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "37b1cb81-9588-4efb-8f22-a3e089ae4402" (UID: "37b1cb81-9588-4efb-8f22-a3e089ae4402"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.623415 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b1cb81-9588-4efb-8f22-a3e089ae4402-kube-api-access-hvvv8" (OuterVolumeSpecName: "kube-api-access-hvvv8") pod "37b1cb81-9588-4efb-8f22-a3e089ae4402" (UID: "37b1cb81-9588-4efb-8f22-a3e089ae4402"). InnerVolumeSpecName "kube-api-access-hvvv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.625092 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-scripts" (OuterVolumeSpecName: "scripts") pod "37b1cb81-9588-4efb-8f22-a3e089ae4402" (UID: "37b1cb81-9588-4efb-8f22-a3e089ae4402"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.662978 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "37b1cb81-9588-4efb-8f22-a3e089ae4402" (UID: "37b1cb81-9588-4efb-8f22-a3e089ae4402"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.694581 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37b1cb81-9588-4efb-8f22-a3e089ae4402" (UID: "37b1cb81-9588-4efb-8f22-a3e089ae4402"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.720681 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-config-data" (OuterVolumeSpecName: "config-data") pod "37b1cb81-9588-4efb-8f22-a3e089ae4402" (UID: "37b1cb81-9588-4efb-8f22-a3e089ae4402"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.721649 4719 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37b1cb81-9588-4efb-8f22-a3e089ae4402-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.721740 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.721756 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvvv8\" (UniqueName: \"kubernetes.io/projected/37b1cb81-9588-4efb-8f22-a3e089ae4402-kube-api-access-hvvv8\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.721769 4719 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.721777 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.721786 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37b1cb81-9588-4efb-8f22-a3e089ae4402-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.925684 4719 generic.go:334] "Generic (PLEG): container finished" podID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerID="a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7" exitCode=0 Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.925727 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerDied","Data":"a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7"} Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.925756 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37b1cb81-9588-4efb-8f22-a3e089ae4402","Type":"ContainerDied","Data":"3234e15b4bd7defe249ca824f05fc5e1271793ffa720bafdaf948bb92492fd0f"} Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.925764 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.925774 4719 scope.go:117] "RemoveContainer" containerID="a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.946216 4719 scope.go:117] "RemoveContainer" containerID="352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.960660 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.966515 4719 scope.go:117] "RemoveContainer" containerID="a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.972704 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.991961 4719 scope.go:117] "RemoveContainer" containerID="f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314" Nov 24 09:12:41 crc kubenswrapper[4719]: I1124 09:12:41.999438 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:42 crc kubenswrapper[4719]: E1124 09:12:42.000136 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="ceilometer-central-agent" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.000159 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="ceilometer-central-agent" Nov 24 09:12:42 crc kubenswrapper[4719]: E1124 09:12:42.000173 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="sg-core" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.000179 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="sg-core" Nov 24 09:12:42 crc kubenswrapper[4719]: E1124 09:12:42.000188 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="ceilometer-notification-agent" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.000196 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="ceilometer-notification-agent" Nov 24 09:12:42 crc kubenswrapper[4719]: E1124 09:12:42.000213 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="proxy-httpd" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.000219 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="proxy-httpd" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.000378 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="sg-core" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.000388 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="ceilometer-central-agent" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.000404 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="ceilometer-notification-agent" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.000415 4719 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" containerName="proxy-httpd" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.002010 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.004826 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.005195 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.018452 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.029409 4719 scope.go:117] "RemoveContainer" containerID="a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c" Nov 24 09:12:42 crc kubenswrapper[4719]: E1124 09:12:42.039464 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c\": container with ID starting with a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c not found: ID does not exist" containerID="a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.039524 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c"} err="failed to get container status \"a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c\": rpc error: code = NotFound desc = could not find container \"a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c\": container with ID starting with a11289ed34807e11015c516f087cd0d42f8bb7922cca77c8c024a27db5d61c9c not found: ID does not exist" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.039560 4719 scope.go:117] "RemoveContainer" containerID="352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51" Nov 24 09:12:42 crc kubenswrapper[4719]: E1124 09:12:42.040477 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51\": container with ID starting with 352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51 not found: ID does not exist" containerID="352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.040587 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51"} err="failed to get container status \"352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51\": rpc error: code = NotFound desc = could not find container \"352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51\": container with ID starting with 352671d279562bea8ef636c2db78d03b05ce12558063ed60276840230cfffc51 not found: ID does not exist" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.040668 4719 scope.go:117] "RemoveContainer" containerID="a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7" Nov 24 09:12:42 crc kubenswrapper[4719]: E1124 09:12:42.041010 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7\": container with ID starting with a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7 not found: ID does not exist" containerID="a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.041069 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7"} err="failed to get container status \"a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7\": rpc error: code = NotFound desc = could not find container \"a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7\": container with ID starting with a3447fd5b2b9278d1dad80e8aea787363e05b4a26cca9da4d0b1c159f332b1f7 not found: ID does not exist" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.041092 4719 scope.go:117] "RemoveContainer" containerID="f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314" Nov 24 09:12:42 crc kubenswrapper[4719]: E1124 09:12:42.041691 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314\": container with ID starting with f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314 not found: ID does not exist" containerID="f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.041786 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314"} err="failed to get container status \"f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314\": rpc error: code = NotFound desc = could not find container \"f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314\": container with ID starting with f358ef62736e916efc6cc1a1bfe60c8ace35c5740757ea64ae6afd0dcc910314 not found: ID does not exist" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.131872 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-scripts\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.131955 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-log-httpd\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.131983 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.132077 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk8dh\" (UniqueName: \"kubernetes.io/projected/3160535e-8087-4bbe-a69a-e586fa734825-kube-api-access-gk8dh\") pod \"ceilometer-0\" (UID: 
\"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.132124 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.132160 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-config-data\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.132178 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-run-httpd\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.234197 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk8dh\" (UniqueName: \"kubernetes.io/projected/3160535e-8087-4bbe-a69a-e586fa734825-kube-api-access-gk8dh\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.234290 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.235004 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-config-data\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.235076 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-run-httpd\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.235504 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-run-httpd\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.235610 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-scripts\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.235667 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-log-httpd\") pod 
\"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.235703 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.236567 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-log-httpd\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.239990 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-scripts\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.243277 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.245255 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-config-data\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.252153 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.256775 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk8dh\" (UniqueName: \"kubernetes.io/projected/3160535e-8087-4bbe-a69a-e586fa734825-kube-api-access-gk8dh\") pod \"ceilometer-0\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.332856 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.530328 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b1cb81-9588-4efb-8f22-a3e089ae4402" path="/var/lib/kubelet/pods/37b1cb81-9588-4efb-8f22-a3e089ae4402/volumes" Nov 24 09:12:42 crc kubenswrapper[4719]: I1124 09:12:42.961632 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:42 crc kubenswrapper[4719]: W1124 09:12:42.963669 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3160535e_8087_4bbe_a69a_e586fa734825.slice/crio-257c6cac3e0eed6cc2a0ce1020b1ebde7e935b6af193bb4b17d9f2a35b4b187f WatchSource:0}: Error finding container 257c6cac3e0eed6cc2a0ce1020b1ebde7e935b6af193bb4b17d9f2a35b4b187f: Status 404 returned error can't find the container with id 257c6cac3e0eed6cc2a0ce1020b1ebde7e935b6af193bb4b17d9f2a35b4b187f Nov 24 09:12:43 crc kubenswrapper[4719]: I1124 09:12:43.958844 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerStarted","Data":"7fc42c7c86a587a0ed0efe8aab8087fd330fd8c28158791cd55e6f654d7ef46b"} Nov 24 09:12:43 crc kubenswrapper[4719]: I1124 09:12:43.959152 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerStarted","Data":"257c6cac3e0eed6cc2a0ce1020b1ebde7e935b6af193bb4b17d9f2a35b4b187f"} Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.675206 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xhrh9"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.676401 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xhrh9"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.676478 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qkjqj"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.677256 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.677361 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.687306 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qkjqj"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.776760 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-operator-scripts\") pod \"nova-cell0-db-create-qkjqj\" (UID: \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\") " pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.777116 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a284fb9e-518e-4ae6-b20b-8016ed5eef59-operator-scripts\") pod \"nova-api-db-create-xhrh9\" (UID: \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\") " pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.777137 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tgg\" (UniqueName: \"kubernetes.io/projected/a284fb9e-518e-4ae6-b20b-8016ed5eef59-kube-api-access-74tgg\") pod \"nova-api-db-create-xhrh9\" (UID: \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\") " pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.777210 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtbwk\" (UniqueName: \"kubernetes.io/projected/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-kube-api-access-xtbwk\") pod \"nova-cell0-db-create-qkjqj\" (UID: \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\") " pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.784369 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1017-account-create-vg8zv"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.785613 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.789342 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.800652 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1017-account-create-vg8zv"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.876455 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-s6jf5"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.877513 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.878401 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgd9v\" (UniqueName: \"kubernetes.io/projected/9e484202-a53f-45ea-a78e-a596ab07ff66-kube-api-access-fgd9v\") pod \"nova-api-1017-account-create-vg8zv\" (UID: \"9e484202-a53f-45ea-a78e-a596ab07ff66\") " pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.878469 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtbwk\" (UniqueName: \"kubernetes.io/projected/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-kube-api-access-xtbwk\") pod \"nova-cell0-db-create-qkjqj\" (UID: \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\") " pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.878547 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e484202-a53f-45ea-a78e-a596ab07ff66-operator-scripts\") pod \"nova-api-1017-account-create-vg8zv\" (UID: \"9e484202-a53f-45ea-a78e-a596ab07ff66\") " pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.878580 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-operator-scripts\") pod \"nova-cell0-db-create-qkjqj\" (UID: \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\") " pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.878696 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a284fb9e-518e-4ae6-b20b-8016ed5eef59-operator-scripts\") pod \"nova-api-db-create-xhrh9\" (UID: \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\") " pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.878737 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tgg\" (UniqueName: \"kubernetes.io/projected/a284fb9e-518e-4ae6-b20b-8016ed5eef59-kube-api-access-74tgg\") pod \"nova-api-db-create-xhrh9\" (UID: \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\") " pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.879573 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-operator-scripts\") pod \"nova-cell0-db-create-qkjqj\" (UID: \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\") " pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.879815 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a284fb9e-518e-4ae6-b20b-8016ed5eef59-operator-scripts\") pod \"nova-api-db-create-xhrh9\" (UID: \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\") " pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.890494 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6jf5"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.913784 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xtbwk\" (UniqueName: \"kubernetes.io/projected/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-kube-api-access-xtbwk\") pod \"nova-cell0-db-create-qkjqj\" (UID: \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\") " pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.914165 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tgg\" (UniqueName: \"kubernetes.io/projected/a284fb9e-518e-4ae6-b20b-8016ed5eef59-kube-api-access-74tgg\") pod \"nova-api-db-create-xhrh9\" (UID: \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\") " pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.980108 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgd9v\" (UniqueName: \"kubernetes.io/projected/9e484202-a53f-45ea-a78e-a596ab07ff66-kube-api-access-fgd9v\") pod \"nova-api-1017-account-create-vg8zv\" (UID: \"9e484202-a53f-45ea-a78e-a596ab07ff66\") " pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.980805 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwpv2\" (UniqueName: \"kubernetes.io/projected/496251b9-2f65-457d-b68a-84d23bc3b05c-kube-api-access-nwpv2\") pod \"nova-cell1-db-create-s6jf5\" (UID: \"496251b9-2f65-457d-b68a-84d23bc3b05c\") " pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.980935 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e484202-a53f-45ea-a78e-a596ab07ff66-operator-scripts\") pod \"nova-api-1017-account-create-vg8zv\" (UID: \"9e484202-a53f-45ea-a78e-a596ab07ff66\") " pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.981013 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/496251b9-2f65-457d-b68a-84d23bc3b05c-operator-scripts\") pod \"nova-cell1-db-create-s6jf5\" (UID: \"496251b9-2f65-457d-b68a-84d23bc3b05c\") " pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.988538 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e484202-a53f-45ea-a78e-a596ab07ff66-operator-scripts\") pod \"nova-api-1017-account-create-vg8zv\" (UID: \"9e484202-a53f-45ea-a78e-a596ab07ff66\") " pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.989637 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerStarted","Data":"03545927ad1fd28a09891fa879432a2a0b76f00ebe7a275c8964293f20783476"} Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.994368 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e450-account-create-xv7lm"] Nov 24 09:12:44 crc kubenswrapper[4719]: I1124 09:12:44.995975 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.001833 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.005142 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e450-account-create-xv7lm"] Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.007495 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgd9v\" (UniqueName: \"kubernetes.io/projected/9e484202-a53f-45ea-a78e-a596ab07ff66-kube-api-access-fgd9v\") pod \"nova-api-1017-account-create-vg8zv\" (UID: \"9e484202-a53f-45ea-a78e-a596ab07ff66\") " pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.024174 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.027650 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.082068 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwpv2\" (UniqueName: \"kubernetes.io/projected/496251b9-2f65-457d-b68a-84d23bc3b05c-kube-api-access-nwpv2\") pod \"nova-cell1-db-create-s6jf5\" (UID: \"496251b9-2f65-457d-b68a-84d23bc3b05c\") " pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.082168 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/496251b9-2f65-457d-b68a-84d23bc3b05c-operator-scripts\") pod \"nova-cell1-db-create-s6jf5\" (UID: \"496251b9-2f65-457d-b68a-84d23bc3b05c\") " pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.082234 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad086a9d-061e-45a5-8364-758c44b03485-operator-scripts\") pod \"nova-cell0-e450-account-create-xv7lm\" (UID: \"ad086a9d-061e-45a5-8364-758c44b03485\") " pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.082300 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btx6d\" (UniqueName: \"kubernetes.io/projected/ad086a9d-061e-45a5-8364-758c44b03485-kube-api-access-btx6d\") pod \"nova-cell0-e450-account-create-xv7lm\" (UID: \"ad086a9d-061e-45a5-8364-758c44b03485\") " pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.083213 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/496251b9-2f65-457d-b68a-84d23bc3b05c-operator-scripts\") pod \"nova-cell1-db-create-s6jf5\" (UID: \"496251b9-2f65-457d-b68a-84d23bc3b05c\") " pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.105100 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwpv2\" (UniqueName: \"kubernetes.io/projected/496251b9-2f65-457d-b68a-84d23bc3b05c-kube-api-access-nwpv2\") pod \"nova-cell1-db-create-s6jf5\" (UID: 
\"496251b9-2f65-457d-b68a-84d23bc3b05c\") " pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.113318 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.183917 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btx6d\" (UniqueName: \"kubernetes.io/projected/ad086a9d-061e-45a5-8364-758c44b03485-kube-api-access-btx6d\") pod \"nova-cell0-e450-account-create-xv7lm\" (UID: \"ad086a9d-061e-45a5-8364-758c44b03485\") " pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.184150 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad086a9d-061e-45a5-8364-758c44b03485-operator-scripts\") pod \"nova-cell0-e450-account-create-xv7lm\" (UID: \"ad086a9d-061e-45a5-8364-758c44b03485\") " pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.186059 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad086a9d-061e-45a5-8364-758c44b03485-operator-scripts\") pod \"nova-cell0-e450-account-create-xv7lm\" (UID: \"ad086a9d-061e-45a5-8364-758c44b03485\") " pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.204057 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.207744 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btx6d\" (UniqueName: \"kubernetes.io/projected/ad086a9d-061e-45a5-8364-758c44b03485-kube-api-access-btx6d\") pod \"nova-cell0-e450-account-create-xv7lm\" (UID: \"ad086a9d-061e-45a5-8364-758c44b03485\") " pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.305990 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c34a-account-create-z5z27"] Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.310311 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.314212 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.334268 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.401027 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c34a-account-create-z5z27"] Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.501026 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f766839a-efee-4eb9-bfa5-ba2d5329af55-operator-scripts\") pod \"nova-cell1-c34a-account-create-z5z27\" (UID: \"f766839a-efee-4eb9-bfa5-ba2d5329af55\") " pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.501183 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cr2k\" (UniqueName: \"kubernetes.io/projected/f766839a-efee-4eb9-bfa5-ba2d5329af55-kube-api-access-9cr2k\") pod \"nova-cell1-c34a-account-create-z5z27\" (UID: \"f766839a-efee-4eb9-bfa5-ba2d5329af55\") " pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.604004 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cr2k\" (UniqueName: \"kubernetes.io/projected/f766839a-efee-4eb9-bfa5-ba2d5329af55-kube-api-access-9cr2k\") pod \"nova-cell1-c34a-account-create-z5z27\" (UID: \"f766839a-efee-4eb9-bfa5-ba2d5329af55\") " pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.604328 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f766839a-efee-4eb9-bfa5-ba2d5329af55-operator-scripts\") pod \"nova-cell1-c34a-account-create-z5z27\" (UID: \"f766839a-efee-4eb9-bfa5-ba2d5329af55\") " pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.604998 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f766839a-efee-4eb9-bfa5-ba2d5329af55-operator-scripts\") pod \"nova-cell1-c34a-account-create-z5z27\" (UID: \"f766839a-efee-4eb9-bfa5-ba2d5329af55\") " pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.654673 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cr2k\" (UniqueName: \"kubernetes.io/projected/f766839a-efee-4eb9-bfa5-ba2d5329af55-kube-api-access-9cr2k\") pod \"nova-cell1-c34a-account-create-z5z27\" (UID: \"f766839a-efee-4eb9-bfa5-ba2d5329af55\") " pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.676694 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qkjqj"] Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.773617 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xhrh9"] Nov 24 09:12:45 crc kubenswrapper[4719]: W1124 09:12:45.800157 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda284fb9e_518e_4ae6_b20b_8016ed5eef59.slice/crio-86ffcd3691a09682628bd56ccdaacda73ac850ac99eba8d2e4ea531e86adef59 WatchSource:0}: Error finding container 86ffcd3691a09682628bd56ccdaacda73ac850ac99eba8d2e4ea531e86adef59: Status 404 returned error can't find the container with id 
86ffcd3691a09682628bd56ccdaacda73ac850ac99eba8d2e4ea531e86adef59 Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.957644 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:45 crc kubenswrapper[4719]: I1124 09:12:45.978299 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1017-account-create-vg8zv"] Nov 24 09:12:46 crc kubenswrapper[4719]: I1124 09:12:46.005836 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qkjqj" event={"ID":"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa","Type":"ContainerStarted","Data":"aeefc604e3ee1b8c2a802225511745aab5d3a20daf07823be1e6cbde76a7f64c"} Nov 24 09:12:46 crc kubenswrapper[4719]: I1124 09:12:46.007691 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6jf5"] Nov 24 09:12:46 crc kubenswrapper[4719]: I1124 09:12:46.013819 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerStarted","Data":"d41c6cb18633057f7b541d63406248fc193811764cf93e42767185c63805fb47"} Nov 24 09:12:46 crc kubenswrapper[4719]: I1124 09:12:46.026359 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xhrh9" event={"ID":"a284fb9e-518e-4ae6-b20b-8016ed5eef59","Type":"ContainerStarted","Data":"86ffcd3691a09682628bd56ccdaacda73ac850ac99eba8d2e4ea531e86adef59"} Nov 24 09:12:46 crc kubenswrapper[4719]: I1124 09:12:46.195328 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e450-account-create-xv7lm"] Nov 24 09:12:46 crc kubenswrapper[4719]: I1124 09:12:46.663940 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c34a-account-create-z5z27"] Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.035199 4719 generic.go:334] "Generic (PLEG): container finished" podID="9e484202-a53f-45ea-a78e-a596ab07ff66" containerID="b90cb70c9465eb8b791b70f06bbc6f52a19e1701400a951e5d67552f0e477d9b" exitCode=0 Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.035244 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1017-account-create-vg8zv" event={"ID":"9e484202-a53f-45ea-a78e-a596ab07ff66","Type":"ContainerDied","Data":"b90cb70c9465eb8b791b70f06bbc6f52a19e1701400a951e5d67552f0e477d9b"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.035282 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1017-account-create-vg8zv" event={"ID":"9e484202-a53f-45ea-a78e-a596ab07ff66","Type":"ContainerStarted","Data":"c8a8c4750fdc0deda7c32e7e63bc646f5cabc2278cb256dc6317e852e64c8885"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.036492 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c34a-account-create-z5z27" event={"ID":"f766839a-efee-4eb9-bfa5-ba2d5329af55","Type":"ContainerStarted","Data":"5b497addf26c8af27f377a92489dab3d2d3ccd6764a24de70fc4ea6cc4a16257"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.036731 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c34a-account-create-z5z27" event={"ID":"f766839a-efee-4eb9-bfa5-ba2d5329af55","Type":"ContainerStarted","Data":"a87688ab3d7494e4d00dc390c35403878316f7d7c2587c8100abb8c3d38718fd"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.038218 4719 generic.go:334] "Generic (PLEG): container finished" 
podID="ad086a9d-061e-45a5-8364-758c44b03485" containerID="b960c4b24d5aa8e2656489943ae260231f26a7a4e5b4ce3959ac8197f6bb4a05" exitCode=0 Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.038254 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e450-account-create-xv7lm" event={"ID":"ad086a9d-061e-45a5-8364-758c44b03485","Type":"ContainerDied","Data":"b960c4b24d5aa8e2656489943ae260231f26a7a4e5b4ce3959ac8197f6bb4a05"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.038300 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e450-account-create-xv7lm" event={"ID":"ad086a9d-061e-45a5-8364-758c44b03485","Type":"ContainerStarted","Data":"ad8a71239c7c7b0ff59dfa5562d4e7ba107b8918e1c38e9e274176dc203f8782"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.039448 4719 generic.go:334] "Generic (PLEG): container finished" podID="a284fb9e-518e-4ae6-b20b-8016ed5eef59" containerID="a78e774427a1be728f20ae8faa783eabbcf28a272874cc10dd7bb0bd4f53f69a" exitCode=0 Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.039524 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xhrh9" event={"ID":"a284fb9e-518e-4ae6-b20b-8016ed5eef59","Type":"ContainerDied","Data":"a78e774427a1be728f20ae8faa783eabbcf28a272874cc10dd7bb0bd4f53f69a"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.040899 4719 generic.go:334] "Generic (PLEG): container finished" podID="ec8d7749-fbf6-4898-bebf-8df0fe88d0fa" containerID="bc75821ffacf307a4f0cc64398940237c2c3259437457f4bab23989221e1d80e" exitCode=0 Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.040987 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qkjqj" event={"ID":"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa","Type":"ContainerDied","Data":"bc75821ffacf307a4f0cc64398940237c2c3259437457f4bab23989221e1d80e"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.042013 4719 generic.go:334] "Generic (PLEG): container finished" podID="496251b9-2f65-457d-b68a-84d23bc3b05c" containerID="195a8c333514f59c03658a4f78a5231b7ec69e5dd788519b8aa6b679c0ee0ee1" exitCode=0 Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.042060 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6jf5" event={"ID":"496251b9-2f65-457d-b68a-84d23bc3b05c","Type":"ContainerDied","Data":"195a8c333514f59c03658a4f78a5231b7ec69e5dd788519b8aa6b679c0ee0ee1"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.042082 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6jf5" event={"ID":"496251b9-2f65-457d-b68a-84d23bc3b05c","Type":"ContainerStarted","Data":"be32d6fa8d8ca9b47e80c30d9ebac813bca5eadd004d7a27b22c293c9cad9078"} Nov 24 09:12:47 crc kubenswrapper[4719]: I1124 09:12:47.120524 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c34a-account-create-z5z27" podStartSLOduration=2.120492909 podStartE2EDuration="2.120492909s" podCreationTimestamp="2025-11-24 09:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:12:47.117623986 +0000 UTC m=+1143.448897248" watchObservedRunningTime="2025-11-24 09:12:47.120492909 +0000 UTC m=+1143.451766161" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.052185 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerStarted","Data":"8fe57bd90a844df1d7e8fda78ba86b2d321c47102563cf30c66e09bad452eda1"} Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.054026 4719 generic.go:334] "Generic (PLEG): container finished" podID="f766839a-efee-4eb9-bfa5-ba2d5329af55" containerID="5b497addf26c8af27f377a92489dab3d2d3ccd6764a24de70fc4ea6cc4a16257" exitCode=0 Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.054077 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c34a-account-create-z5z27" event={"ID":"f766839a-efee-4eb9-bfa5-ba2d5329af55","Type":"ContainerDied","Data":"5b497addf26c8af27f377a92489dab3d2d3ccd6764a24de70fc4ea6cc4a16257"} Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.074222 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.825586387 podStartE2EDuration="7.07420707s" podCreationTimestamp="2025-11-24 09:12:41 +0000 UTC" firstStartedPulling="2025-11-24 09:12:42.966136739 +0000 UTC m=+1139.297409991" lastFinishedPulling="2025-11-24 09:12:47.214757432 +0000 UTC m=+1143.546030674" observedRunningTime="2025-11-24 09:12:48.073662374 +0000 UTC m=+1144.404935646" watchObservedRunningTime="2025-11-24 09:12:48.07420707 +0000 UTC m=+1144.405480322" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.543191 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.673940 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgd9v\" (UniqueName: \"kubernetes.io/projected/9e484202-a53f-45ea-a78e-a596ab07ff66-kube-api-access-fgd9v\") pod \"9e484202-a53f-45ea-a78e-a596ab07ff66\" (UID: \"9e484202-a53f-45ea-a78e-a596ab07ff66\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.674118 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e484202-a53f-45ea-a78e-a596ab07ff66-operator-scripts\") pod \"9e484202-a53f-45ea-a78e-a596ab07ff66\" (UID: \"9e484202-a53f-45ea-a78e-a596ab07ff66\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.674626 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e484202-a53f-45ea-a78e-a596ab07ff66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e484202-a53f-45ea-a78e-a596ab07ff66" (UID: "9e484202-a53f-45ea-a78e-a596ab07ff66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.674763 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e484202-a53f-45ea-a78e-a596ab07ff66-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.679697 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e484202-a53f-45ea-a78e-a596ab07ff66-kube-api-access-fgd9v" (OuterVolumeSpecName: "kube-api-access-fgd9v") pod "9e484202-a53f-45ea-a78e-a596ab07ff66" (UID: "9e484202-a53f-45ea-a78e-a596ab07ff66"). InnerVolumeSpecName "kube-api-access-fgd9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.692060 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.762579 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.776511 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btx6d\" (UniqueName: \"kubernetes.io/projected/ad086a9d-061e-45a5-8364-758c44b03485-kube-api-access-btx6d\") pod \"ad086a9d-061e-45a5-8364-758c44b03485\" (UID: \"ad086a9d-061e-45a5-8364-758c44b03485\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.776651 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad086a9d-061e-45a5-8364-758c44b03485-operator-scripts\") pod \"ad086a9d-061e-45a5-8364-758c44b03485\" (UID: \"ad086a9d-061e-45a5-8364-758c44b03485\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.777171 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgd9v\" (UniqueName: \"kubernetes.io/projected/9e484202-a53f-45ea-a78e-a596ab07ff66-kube-api-access-fgd9v\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.777486 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad086a9d-061e-45a5-8364-758c44b03485-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad086a9d-061e-45a5-8364-758c44b03485" (UID: "ad086a9d-061e-45a5-8364-758c44b03485"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.779324 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.784108 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad086a9d-061e-45a5-8364-758c44b03485-kube-api-access-btx6d" (OuterVolumeSpecName: "kube-api-access-btx6d") pod "ad086a9d-061e-45a5-8364-758c44b03485" (UID: "ad086a9d-061e-45a5-8364-758c44b03485"). InnerVolumeSpecName "kube-api-access-btx6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.792953 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.884555 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwpv2\" (UniqueName: \"kubernetes.io/projected/496251b9-2f65-457d-b68a-84d23bc3b05c-kube-api-access-nwpv2\") pod \"496251b9-2f65-457d-b68a-84d23bc3b05c\" (UID: \"496251b9-2f65-457d-b68a-84d23bc3b05c\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.884646 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtbwk\" (UniqueName: \"kubernetes.io/projected/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-kube-api-access-xtbwk\") pod \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\" (UID: \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.884684 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a284fb9e-518e-4ae6-b20b-8016ed5eef59-operator-scripts\") pod \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\" (UID: \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.884729 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-operator-scripts\") pod \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\" (UID: \"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.884750 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/496251b9-2f65-457d-b68a-84d23bc3b05c-operator-scripts\") pod \"496251b9-2f65-457d-b68a-84d23bc3b05c\" (UID: \"496251b9-2f65-457d-b68a-84d23bc3b05c\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.884783 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74tgg\" (UniqueName: \"kubernetes.io/projected/a284fb9e-518e-4ae6-b20b-8016ed5eef59-kube-api-access-74tgg\") pod \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\" (UID: \"a284fb9e-518e-4ae6-b20b-8016ed5eef59\") " Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.888526 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a284fb9e-518e-4ae6-b20b-8016ed5eef59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a284fb9e-518e-4ae6-b20b-8016ed5eef59" (UID: "a284fb9e-518e-4ae6-b20b-8016ed5eef59"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.889633 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btx6d\" (UniqueName: \"kubernetes.io/projected/ad086a9d-061e-45a5-8364-758c44b03485-kube-api-access-btx6d\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.889658 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a284fb9e-518e-4ae6-b20b-8016ed5eef59-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.889673 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad086a9d-061e-45a5-8364-758c44b03485-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.890328 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496251b9-2f65-457d-b68a-84d23bc3b05c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "496251b9-2f65-457d-b68a-84d23bc3b05c" (UID: "496251b9-2f65-457d-b68a-84d23bc3b05c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.890752 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec8d7749-fbf6-4898-bebf-8df0fe88d0fa" (UID: "ec8d7749-fbf6-4898-bebf-8df0fe88d0fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.900524 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-kube-api-access-xtbwk" (OuterVolumeSpecName: "kube-api-access-xtbwk") pod "ec8d7749-fbf6-4898-bebf-8df0fe88d0fa" (UID: "ec8d7749-fbf6-4898-bebf-8df0fe88d0fa"). InnerVolumeSpecName "kube-api-access-xtbwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.903262 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496251b9-2f65-457d-b68a-84d23bc3b05c-kube-api-access-nwpv2" (OuterVolumeSpecName: "kube-api-access-nwpv2") pod "496251b9-2f65-457d-b68a-84d23bc3b05c" (UID: "496251b9-2f65-457d-b68a-84d23bc3b05c"). InnerVolumeSpecName "kube-api-access-nwpv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.913511 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a284fb9e-518e-4ae6-b20b-8016ed5eef59-kube-api-access-74tgg" (OuterVolumeSpecName: "kube-api-access-74tgg") pod "a284fb9e-518e-4ae6-b20b-8016ed5eef59" (UID: "a284fb9e-518e-4ae6-b20b-8016ed5eef59"). InnerVolumeSpecName "kube-api-access-74tgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.990813 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwpv2\" (UniqueName: \"kubernetes.io/projected/496251b9-2f65-457d-b68a-84d23bc3b05c-kube-api-access-nwpv2\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.990845 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtbwk\" (UniqueName: \"kubernetes.io/projected/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-kube-api-access-xtbwk\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.990857 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.990866 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/496251b9-2f65-457d-b68a-84d23bc3b05c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:48 crc kubenswrapper[4719]: I1124 09:12:48.990875 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74tgg\" (UniqueName: \"kubernetes.io/projected/a284fb9e-518e-4ae6-b20b-8016ed5eef59-kube-api-access-74tgg\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.078204 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xhrh9" event={"ID":"a284fb9e-518e-4ae6-b20b-8016ed5eef59","Type":"ContainerDied","Data":"86ffcd3691a09682628bd56ccdaacda73ac850ac99eba8d2e4ea531e86adef59"} Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.078251 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86ffcd3691a09682628bd56ccdaacda73ac850ac99eba8d2e4ea531e86adef59" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.078332 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xhrh9" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.080976 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qkjqj" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.081164 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qkjqj" event={"ID":"ec8d7749-fbf6-4898-bebf-8df0fe88d0fa","Type":"ContainerDied","Data":"aeefc604e3ee1b8c2a802225511745aab5d3a20daf07823be1e6cbde76a7f64c"} Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.081210 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeefc604e3ee1b8c2a802225511745aab5d3a20daf07823be1e6cbde76a7f64c" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.082880 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s6jf5" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.082887 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6jf5" event={"ID":"496251b9-2f65-457d-b68a-84d23bc3b05c","Type":"ContainerDied","Data":"be32d6fa8d8ca9b47e80c30d9ebac813bca5eadd004d7a27b22c293c9cad9078"} Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.082921 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be32d6fa8d8ca9b47e80c30d9ebac813bca5eadd004d7a27b22c293c9cad9078" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.085731 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1017-account-create-vg8zv" event={"ID":"9e484202-a53f-45ea-a78e-a596ab07ff66","Type":"ContainerDied","Data":"c8a8c4750fdc0deda7c32e7e63bc646f5cabc2278cb256dc6317e852e64c8885"} Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.085769 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8a8c4750fdc0deda7c32e7e63bc646f5cabc2278cb256dc6317e852e64c8885" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.085826 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1017-account-create-vg8zv" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.096241 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e450-account-create-xv7lm" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.098115 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e450-account-create-xv7lm" event={"ID":"ad086a9d-061e-45a5-8364-758c44b03485","Type":"ContainerDied","Data":"ad8a71239c7c7b0ff59dfa5562d4e7ba107b8918e1c38e9e274176dc203f8782"} Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.098167 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8a71239c7c7b0ff59dfa5562d4e7ba107b8918e1c38e9e274176dc203f8782" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.100001 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.375966 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.500131 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f766839a-efee-4eb9-bfa5-ba2d5329af55-operator-scripts\") pod \"f766839a-efee-4eb9-bfa5-ba2d5329af55\" (UID: \"f766839a-efee-4eb9-bfa5-ba2d5329af55\") " Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.500199 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cr2k\" (UniqueName: \"kubernetes.io/projected/f766839a-efee-4eb9-bfa5-ba2d5329af55-kube-api-access-9cr2k\") pod \"f766839a-efee-4eb9-bfa5-ba2d5329af55\" (UID: \"f766839a-efee-4eb9-bfa5-ba2d5329af55\") " Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.500652 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f766839a-efee-4eb9-bfa5-ba2d5329af55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f766839a-efee-4eb9-bfa5-ba2d5329af55" (UID: "f766839a-efee-4eb9-bfa5-ba2d5329af55"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.504865 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f766839a-efee-4eb9-bfa5-ba2d5329af55-kube-api-access-9cr2k" (OuterVolumeSpecName: "kube-api-access-9cr2k") pod "f766839a-efee-4eb9-bfa5-ba2d5329af55" (UID: "f766839a-efee-4eb9-bfa5-ba2d5329af55"). InnerVolumeSpecName "kube-api-access-9cr2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.602159 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f766839a-efee-4eb9-bfa5-ba2d5329af55-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:49 crc kubenswrapper[4719]: I1124 09:12:49.602198 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cr2k\" (UniqueName: \"kubernetes.io/projected/f766839a-efee-4eb9-bfa5-ba2d5329af55-kube-api-access-9cr2k\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:50 crc kubenswrapper[4719]: I1124 09:12:50.104402 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c34a-account-create-z5z27" Nov 24 09:12:50 crc kubenswrapper[4719]: I1124 09:12:50.105223 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c34a-account-create-z5z27" event={"ID":"f766839a-efee-4eb9-bfa5-ba2d5329af55","Type":"ContainerDied","Data":"a87688ab3d7494e4d00dc390c35403878316f7d7c2587c8100abb8c3d38718fd"} Nov 24 09:12:50 crc kubenswrapper[4719]: I1124 09:12:50.105277 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a87688ab3d7494e4d00dc390c35403878316f7d7c2587c8100abb8c3d38718fd" Nov 24 09:12:50 crc kubenswrapper[4719]: I1124 09:12:50.414154 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:51 crc kubenswrapper[4719]: I1124 09:12:51.110927 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="ceilometer-central-agent" containerID="cri-o://7fc42c7c86a587a0ed0efe8aab8087fd330fd8c28158791cd55e6f654d7ef46b" gracePeriod=30 Nov 24 09:12:51 crc kubenswrapper[4719]: I1124 09:12:51.110972 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="sg-core" containerID="cri-o://d41c6cb18633057f7b541d63406248fc193811764cf93e42767185c63805fb47" gracePeriod=30 Nov 24 09:12:51 crc kubenswrapper[4719]: I1124 09:12:51.111021 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="proxy-httpd" containerID="cri-o://8fe57bd90a844df1d7e8fda78ba86b2d321c47102563cf30c66e09bad452eda1" gracePeriod=30 Nov 24 09:12:51 crc kubenswrapper[4719]: I1124 09:12:51.111001 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="ceilometer-notification-agent" containerID="cri-o://03545927ad1fd28a09891fa879432a2a0b76f00ebe7a275c8964293f20783476" gracePeriod=30 Nov 24 09:12:51 crc kubenswrapper[4719]: E1124 09:12:51.745508 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3160535e_8087_4bbe_a69a_e586fa734825.slice/crio-conmon-03545927ad1fd28a09891fa879432a2a0b76f00ebe7a275c8964293f20783476.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3160535e_8087_4bbe_a69a_e586fa734825.slice/crio-03545927ad1fd28a09891fa879432a2a0b76f00ebe7a275c8964293f20783476.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3160535e_8087_4bbe_a69a_e586fa734825.slice/crio-conmon-7fc42c7c86a587a0ed0efe8aab8087fd330fd8c28158791cd55e6f654d7ef46b.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123621 4719 generic.go:334] "Generic (PLEG): container finished" podID="3160535e-8087-4bbe-a69a-e586fa734825" containerID="8fe57bd90a844df1d7e8fda78ba86b2d321c47102563cf30c66e09bad452eda1" exitCode=0 Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123655 4719 generic.go:334] "Generic (PLEG): container finished" podID="3160535e-8087-4bbe-a69a-e586fa734825" containerID="d41c6cb18633057f7b541d63406248fc193811764cf93e42767185c63805fb47" exitCode=2 Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123682 4719 generic.go:334] "Generic (PLEG): container finished" podID="3160535e-8087-4bbe-a69a-e586fa734825" containerID="03545927ad1fd28a09891fa879432a2a0b76f00ebe7a275c8964293f20783476" exitCode=0 Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123690 4719 generic.go:334] "Generic (PLEG): container finished" podID="3160535e-8087-4bbe-a69a-e586fa734825" containerID="7fc42c7c86a587a0ed0efe8aab8087fd330fd8c28158791cd55e6f654d7ef46b" exitCode=0 Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123710 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerDied","Data":"8fe57bd90a844df1d7e8fda78ba86b2d321c47102563cf30c66e09bad452eda1"} Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123755 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerDied","Data":"d41c6cb18633057f7b541d63406248fc193811764cf93e42767185c63805fb47"} Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123772 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerDied","Data":"03545927ad1fd28a09891fa879432a2a0b76f00ebe7a275c8964293f20783476"} Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123783 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerDied","Data":"7fc42c7c86a587a0ed0efe8aab8087fd330fd8c28158791cd55e6f654d7ef46b"} Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123794 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3160535e-8087-4bbe-a69a-e586fa734825","Type":"ContainerDied","Data":"257c6cac3e0eed6cc2a0ce1020b1ebde7e935b6af193bb4b17d9f2a35b4b187f"} Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.123845 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="257c6cac3e0eed6cc2a0ce1020b1ebde7e935b6af193bb4b17d9f2a35b4b187f" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.128435 4719 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.244005 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-scripts\") pod \"3160535e-8087-4bbe-a69a-e586fa734825\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.244103 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-log-httpd\") pod \"3160535e-8087-4bbe-a69a-e586fa734825\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.244154 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-config-data\") pod \"3160535e-8087-4bbe-a69a-e586fa734825\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.244519 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-run-httpd\") pod \"3160535e-8087-4bbe-a69a-e586fa734825\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.244542 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-sg-core-conf-yaml\") pod \"3160535e-8087-4bbe-a69a-e586fa734825\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.244635 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk8dh\" (UniqueName: \"kubernetes.io/projected/3160535e-8087-4bbe-a69a-e586fa734825-kube-api-access-gk8dh\") pod \"3160535e-8087-4bbe-a69a-e586fa734825\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.244690 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-combined-ca-bundle\") pod \"3160535e-8087-4bbe-a69a-e586fa734825\" (UID: \"3160535e-8087-4bbe-a69a-e586fa734825\") " Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.245055 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3160535e-8087-4bbe-a69a-e586fa734825" (UID: "3160535e-8087-4bbe-a69a-e586fa734825"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.245273 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3160535e-8087-4bbe-a69a-e586fa734825" (UID: "3160535e-8087-4bbe-a69a-e586fa734825"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.245625 4719 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.245643 4719 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3160535e-8087-4bbe-a69a-e586fa734825-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.250700 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-scripts" (OuterVolumeSpecName: "scripts") pod "3160535e-8087-4bbe-a69a-e586fa734825" (UID: "3160535e-8087-4bbe-a69a-e586fa734825"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.251959 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3160535e-8087-4bbe-a69a-e586fa734825-kube-api-access-gk8dh" (OuterVolumeSpecName: "kube-api-access-gk8dh") pod "3160535e-8087-4bbe-a69a-e586fa734825" (UID: "3160535e-8087-4bbe-a69a-e586fa734825"). InnerVolumeSpecName "kube-api-access-gk8dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.276656 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3160535e-8087-4bbe-a69a-e586fa734825" (UID: "3160535e-8087-4bbe-a69a-e586fa734825"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.346850 4719 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.347120 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk8dh\" (UniqueName: \"kubernetes.io/projected/3160535e-8087-4bbe-a69a-e586fa734825-kube-api-access-gk8dh\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.347135 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.372129 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3160535e-8087-4bbe-a69a-e586fa734825" (UID: "3160535e-8087-4bbe-a69a-e586fa734825"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.372801 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-config-data" (OuterVolumeSpecName: "config-data") pod "3160535e-8087-4bbe-a69a-e586fa734825" (UID: "3160535e-8087-4bbe-a69a-e586fa734825"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.449217 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:52 crc kubenswrapper[4719]: I1124 09:12:52.449252 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3160535e-8087-4bbe-a69a-e586fa734825-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.131710 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.151980 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.158013 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.181645 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.181992 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad086a9d-061e-45a5-8364-758c44b03485" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182009 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad086a9d-061e-45a5-8364-758c44b03485" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182020 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f766839a-efee-4eb9-bfa5-ba2d5329af55" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182026 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f766839a-efee-4eb9-bfa5-ba2d5329af55" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182048 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e484202-a53f-45ea-a78e-a596ab07ff66" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182054 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e484202-a53f-45ea-a78e-a596ab07ff66" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182067 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="ceilometer-central-agent" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182073 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="ceilometer-central-agent" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182088 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="sg-core" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182094 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="sg-core" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182103 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="proxy-httpd" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182109 4719 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="proxy-httpd" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182124 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="ceilometer-notification-agent" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182130 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="ceilometer-notification-agent" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182142 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a284fb9e-518e-4ae6-b20b-8016ed5eef59" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182148 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="a284fb9e-518e-4ae6-b20b-8016ed5eef59" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182160 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496251b9-2f65-457d-b68a-84d23bc3b05c" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182166 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="496251b9-2f65-457d-b68a-84d23bc3b05c" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: E1124 09:12:53.182178 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8d7749-fbf6-4898-bebf-8df0fe88d0fa" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182184 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8d7749-fbf6-4898-bebf-8df0fe88d0fa" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182342 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="sg-core" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182358 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="ceilometer-central-agent" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182380 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="ceilometer-notification-agent" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182393 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="a284fb9e-518e-4ae6-b20b-8016ed5eef59" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182411 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e484202-a53f-45ea-a78e-a596ab07ff66" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182424 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad086a9d-061e-45a5-8364-758c44b03485" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182434 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="496251b9-2f65-457d-b68a-84d23bc3b05c" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182443 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="3160535e-8087-4bbe-a69a-e586fa734825" containerName="proxy-httpd" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182454 4719 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f766839a-efee-4eb9-bfa5-ba2d5329af55" containerName="mariadb-account-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.182462 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8d7749-fbf6-4898-bebf-8df0fe88d0fa" containerName="mariadb-database-create" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.185003 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.193754 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.193954 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.215607 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.268591 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.268679 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-run-httpd\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.268709 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrjg\" (UniqueName: \"kubernetes.io/projected/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-kube-api-access-lcrjg\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.268725 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.268745 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-log-httpd\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.268796 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-config-data\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.268810 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-scripts\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 
24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.370359 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-log-httpd\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.370438 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-config-data\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.370460 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-scripts\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.370533 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.370560 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-run-httpd\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.370592 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrjg\" (UniqueName: \"kubernetes.io/projected/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-kube-api-access-lcrjg\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.370608 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.371045 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-log-httpd\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.371405 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-run-httpd\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.376865 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 
09:12:53.377113 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-config-data\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.378248 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.380245 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-scripts\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.394861 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrjg\" (UniqueName: \"kubernetes.io/projected/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-kube-api-access-lcrjg\") pod \"ceilometer-0\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") " pod="openstack/ceilometer-0" Nov 24 09:12:53 crc kubenswrapper[4719]: I1124 09:12:53.501755 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.008901 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:54 crc kubenswrapper[4719]: W1124 09:12:54.015321 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5823e59_5e21_4c4a_86e8_56c6cbf682c5.slice/crio-2b0b306d89e0a9c2b4fd34f271d41caf707b6e68c7001b58e4bc6a2a73f5a667 WatchSource:0}: Error finding container 2b0b306d89e0a9c2b4fd34f271d41caf707b6e68c7001b58e4bc6a2a73f5a667: Status 404 returned error can't find the container with id 2b0b306d89e0a9c2b4fd34f271d41caf707b6e68c7001b58e4bc6a2a73f5a667 Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.141504 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerStarted","Data":"2b0b306d89e0a9c2b4fd34f271d41caf707b6e68c7001b58e4bc6a2a73f5a667"} Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.345339 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.533250 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3160535e-8087-4bbe-a69a-e586fa734825" path="/var/lib/kubelet/pods/3160535e-8087-4bbe-a69a-e586fa734825/volumes" Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.917556 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xxwbr"] Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.918730 4719 util.go:30] "No sandbox for pod can be found. 
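The mount sequence above enumerates the complete volume set of openstack/ceilometer-0: four Secret-backed volumes (combined-ca-bundle, config-data, scripts, sg-core-conf-yaml), two emptyDir scratch volumes (run-httpd, log-httpd), and a projected service-account token (kube-api-access-lcrjg). A minimal sketch of that layout using the k8s.io/api types follows; the Secret names for config-data and scripts come from the reflector records ("ceilometer-config-data", "ceilometer-scripts"), while the Secrets backing sg-core-conf-yaml and combined-ca-bundle are not named in the log and are placeholders. This is a reading aid, not the operator's actual pod manifest.

    // Sketch only: the ceilometer-0 volume layout implied by the mount records.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // secretVol builds a Secret-backed volume like the four seen in the log.
    func secretVol(name, secretName string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: secretName},
            },
        }
    }

    // emptyDirVol builds a scratch volume like run-httpd and log-httpd.
    func emptyDirVol(name string) corev1.Volume {
        return corev1.Volume{
            Name:         name,
            VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
        }
    }

    func main() {
        vols := []corev1.Volume{
            secretVol("config-data", "ceilometer-config-data"),    // Secret name from the reflector record
            secretVol("scripts", "ceilometer-scripts"),            // Secret name from the reflector record
            secretVol("sg-core-conf-yaml", "sg-core-conf-yaml"),   // placeholder: source Secret not named in the log
            secretVol("combined-ca-bundle", "combined-ca-bundle"), // placeholder: source Secret not named in the log
            emptyDirVol("run-httpd"),
            emptyDirVol("log-httpd"),
            // kube-api-access-lcrjg is the projected service-account token
            // volume, injected automatically rather than declared by hand.
        }
        for _, v := range vols {
            fmt.Println(v.Name)
        }
    }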
Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.921554 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5c9tl"
Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.921616 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.921832 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.950245 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xxwbr"]
Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.996265 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-scripts\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.996422 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95kt\" (UniqueName: \"kubernetes.io/projected/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-kube-api-access-b95kt\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.996476 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-config-data\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:54 crc kubenswrapper[4719]: I1124 09:12:54.996592 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.098441 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b95kt\" (UniqueName: \"kubernetes.io/projected/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-kube-api-access-b95kt\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.098511 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-config-data\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.098592 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.098667 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-scripts\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.102727 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-config-data\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.103131 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-scripts\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.105509 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.117881 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b95kt\" (UniqueName: \"kubernetes.io/projected/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-kube-api-access-b95kt\") pod \"nova-cell0-conductor-db-sync-xxwbr\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") " pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.168405 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerStarted","Data":"43ccf5524d7e95ba0b88dae843157210ed1de8516a42861ac9f404ca2e92913c"}
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.243393 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:12:55 crc kubenswrapper[4719]: I1124 09:12:55.546700 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xxwbr"]
Nov 24 09:12:55 crc kubenswrapper[4719]: W1124 09:12:55.552130 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1629c5d_5eb0_4e8a_9f5a_68b0ab618f30.slice/crio-21047c1dee26da477d089cbbd614bfb5ca912fdaa400a96674b35e612efcfd83 WatchSource:0}: Error finding container 21047c1dee26da477d089cbbd614bfb5ca912fdaa400a96674b35e612efcfd83: Status 404 returned error can't find the container with id 21047c1dee26da477d089cbbd614bfb5ca912fdaa400a96674b35e612efcfd83
Nov 24 09:12:56 crc kubenswrapper[4719]: I1124 09:12:56.177845 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xxwbr" event={"ID":"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30","Type":"ContainerStarted","Data":"21047c1dee26da477d089cbbd614bfb5ca912fdaa400a96674b35e612efcfd83"}
Nov 24 09:12:56 crc kubenswrapper[4719]: I1124 09:12:56.181157 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerStarted","Data":"4a74b6c3224a1c8755db7e61ec7f7400e12b6523e3d3440542736f24885c5f1a"}
Nov 24 09:12:57 crc kubenswrapper[4719]: I1124 09:12:57.194168 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerStarted","Data":"06a76cd33665dc1de7330f89339d40611cb47457fbc80adaebb6261138205524"}
Nov 24 09:12:59 crc kubenswrapper[4719]: I1124 09:12:59.218089 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerStarted","Data":"3895eda3d77175e85e7fe173af66ab74b6c726431afd9ec58aa14df5abef50c3"}
Nov 24 09:12:59 crc kubenswrapper[4719]: I1124 09:12:59.218284 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="ceilometer-central-agent" containerID="cri-o://43ccf5524d7e95ba0b88dae843157210ed1de8516a42861ac9f404ca2e92913c" gracePeriod=30
Nov 24 09:12:59 crc kubenswrapper[4719]: I1124 09:12:59.218502 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 09:12:59 crc kubenswrapper[4719]: I1124 09:12:59.218534 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="proxy-httpd" containerID="cri-o://3895eda3d77175e85e7fe173af66ab74b6c726431afd9ec58aa14df5abef50c3" gracePeriod=30
Nov 24 09:12:59 crc kubenswrapper[4719]: I1124 09:12:59.218557 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="sg-core" containerID="cri-o://06a76cd33665dc1de7330f89339d40611cb47457fbc80adaebb6261138205524" gracePeriod=30
Nov 24 09:12:59 crc kubenswrapper[4719]: I1124 09:12:59.218546 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="ceilometer-notification-agent" containerID="cri-o://4a74b6c3224a1c8755db7e61ec7f7400e12b6523e3d3440542736f24885c5f1a" gracePeriod=30
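Each "Killing container with a grace period" record above carries gracePeriod=30: after the SyncLoop DELETE, the kubelet gives every ceilometer-0 container 30 seconds to exit before a hard kill. A minimal sketch of issuing such a delete with client-go follows; only the namespace, pod name, and 30-second grace period come from the log, and the client wiring is assumed boilerplate.

    // Sketch only: a pod delete carrying the 30-second grace period seen above.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        grace := int64(30) // matches gracePeriod=30 in the kubelet records
        if err := cs.CoreV1().Pods("openstack").Delete(context.TODO(), "ceilometer-0",
            metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
            panic(err)
        }
    }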
Nov 24 09:12:59 crc kubenswrapper[4719]: I1124 09:12:59.253615 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.197568322 podStartE2EDuration="6.253592662s" podCreationTimestamp="2025-11-24 09:12:53 +0000 UTC" firstStartedPulling="2025-11-24 09:12:54.017765836 +0000 UTC m=+1150.349039088" lastFinishedPulling="2025-11-24 09:12:58.073790176 +0000 UTC m=+1154.405063428" observedRunningTime="2025-11-24 09:12:59.239517954 +0000 UTC m=+1155.570791206" watchObservedRunningTime="2025-11-24 09:12:59.253592662 +0000 UTC m=+1155.584865914"
Nov 24 09:13:00 crc kubenswrapper[4719]: I1124 09:13:00.232729 4719 generic.go:334] "Generic (PLEG): container finished" podID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerID="3895eda3d77175e85e7fe173af66ab74b6c726431afd9ec58aa14df5abef50c3" exitCode=0
Nov 24 09:13:00 crc kubenswrapper[4719]: I1124 09:13:00.233025 4719 generic.go:334] "Generic (PLEG): container finished" podID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerID="06a76cd33665dc1de7330f89339d40611cb47457fbc80adaebb6261138205524" exitCode=2
Nov 24 09:13:00 crc kubenswrapper[4719]: I1124 09:13:00.232777 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerDied","Data":"3895eda3d77175e85e7fe173af66ab74b6c726431afd9ec58aa14df5abef50c3"}
Nov 24 09:13:00 crc kubenswrapper[4719]: I1124 09:13:00.233093 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerDied","Data":"06a76cd33665dc1de7330f89339d40611cb47457fbc80adaebb6261138205524"}
Nov 24 09:13:00 crc kubenswrapper[4719]: I1124 09:13:00.233109 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerDied","Data":"4a74b6c3224a1c8755db7e61ec7f7400e12b6523e3d3440542736f24885c5f1a"}
Nov 24 09:13:00 crc kubenswrapper[4719]: I1124 09:13:00.233053 4719 generic.go:334] "Generic (PLEG): container finished" podID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerID="4a74b6c3224a1c8755db7e61ec7f7400e12b6523e3d3440542736f24885c5f1a" exitCode=0
Nov 24 09:13:03 crc kubenswrapper[4719]: I1124 09:13:03.267299 4719 generic.go:334] "Generic (PLEG): container finished" podID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerID="43ccf5524d7e95ba0b88dae843157210ed1de8516a42861ac9f404ca2e92913c" exitCode=0
Nov 24 09:13:03 crc kubenswrapper[4719]: I1124 09:13:03.267413 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerDied","Data":"43ccf5524d7e95ba0b88dae843157210ed1de8516a42861ac9f404ca2e92913c"}
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.080497 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
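The "Observed pod startup duration" record above is internally consistent: podStartSLOduration (2.197568322s) is podStartE2EDuration (6.253592662s) minus the image-pull window bounded by firstStartedPulling and lastFinishedPulling (about 4.056s), since the startup SLI excludes pull time. (Compare the later record for nova-cell0-conductor-0, where both pull timestamps are the zero time and the SLO and E2E durations coincide.) The arithmetic can be checked directly:

    // Check: SLO duration = E2E duration minus the image-pull window.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        e2e := 6253592662 * time.Nanosecond // podStartE2EDuration="6.253592662s"
        first, _ := time.Parse(time.RFC3339Nano, "2025-11-24T09:12:54.017765836Z") // firstStartedPulling
        last, _ := time.Parse(time.RFC3339Nano, "2025-11-24T09:12:58.073790176Z")  // lastFinishedPulling

        fmt.Println(e2e - last.Sub(first)) // prints 2.197568322s = podStartSLOduration
    }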
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.176183 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-sg-core-conf-yaml\") pod \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") "
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.176400 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-run-httpd\") pod \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") "
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.176474 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-config-data\") pod \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") "
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.176496 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-log-httpd\") pod \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") "
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.176554 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-combined-ca-bundle\") pod \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") "
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.176644 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcrjg\" (UniqueName: \"kubernetes.io/projected/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-kube-api-access-lcrjg\") pod \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") "
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.176707 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-scripts\") pod \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\" (UID: \"c5823e59-5e21-4c4a-86e8-56c6cbf682c5\") "
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.178279 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c5823e59-5e21-4c4a-86e8-56c6cbf682c5" (UID: "c5823e59-5e21-4c4a-86e8-56c6cbf682c5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.178356 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c5823e59-5e21-4c4a-86e8-56c6cbf682c5" (UID: "c5823e59-5e21-4c4a-86e8-56c6cbf682c5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.181831 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-scripts" (OuterVolumeSpecName: "scripts") pod "c5823e59-5e21-4c4a-86e8-56c6cbf682c5" (UID: "c5823e59-5e21-4c4a-86e8-56c6cbf682c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.182412 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-kube-api-access-lcrjg" (OuterVolumeSpecName: "kube-api-access-lcrjg") pod "c5823e59-5e21-4c4a-86e8-56c6cbf682c5" (UID: "c5823e59-5e21-4c4a-86e8-56c6cbf682c5"). InnerVolumeSpecName "kube-api-access-lcrjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.206889 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c5823e59-5e21-4c4a-86e8-56c6cbf682c5" (UID: "c5823e59-5e21-4c4a-86e8-56c6cbf682c5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.255200 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5823e59-5e21-4c4a-86e8-56c6cbf682c5" (UID: "c5823e59-5e21-4c4a-86e8-56c6cbf682c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.267532 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-config-data" (OuterVolumeSpecName: "config-data") pod "c5823e59-5e21-4c4a-86e8-56c6cbf682c5" (UID: "c5823e59-5e21-4c4a-86e8-56c6cbf682c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.278162 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xxwbr" event={"ID":"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30","Type":"ContainerStarted","Data":"eb3c20c894ab71f62034c6abc2ff661dfc401547e52546f4f66de536b992f090"}
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.278339 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.278359 4719 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.278374 4719 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.278385 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.278395 4719 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.278404 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.278415 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcrjg\" (UniqueName: \"kubernetes.io/projected/c5823e59-5e21-4c4a-86e8-56c6cbf682c5-kube-api-access-lcrjg\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.282873 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c5823e59-5e21-4c4a-86e8-56c6cbf682c5","Type":"ContainerDied","Data":"2b0b306d89e0a9c2b4fd34f271d41caf707b6e68c7001b58e4bc6a2a73f5a667"}
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.283083 4719 scope.go:117] "RemoveContainer" containerID="3895eda3d77175e85e7fe173af66ab74b6c726431afd9ec58aa14df5abef50c3"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.282926 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.301821 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xxwbr" podStartSLOduration=1.976826457 podStartE2EDuration="10.301799079s" podCreationTimestamp="2025-11-24 09:12:54 +0000 UTC" firstStartedPulling="2025-11-24 09:12:55.553818762 +0000 UTC m=+1151.885092014" lastFinishedPulling="2025-11-24 09:13:03.878791384 +0000 UTC m=+1160.210064636" observedRunningTime="2025-11-24 09:13:04.292289673 +0000 UTC m=+1160.623562945" watchObservedRunningTime="2025-11-24 09:13:04.301799079 +0000 UTC m=+1160.633072351"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.318179 4719 scope.go:117] "RemoveContainer" containerID="06a76cd33665dc1de7330f89339d40611cb47457fbc80adaebb6261138205524"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.326925 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.339123 4719 scope.go:117] "RemoveContainer" containerID="4a74b6c3224a1c8755db7e61ec7f7400e12b6523e3d3440542736f24885c5f1a"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.342158 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359232 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:13:04 crc kubenswrapper[4719]: E1124 09:13:04.359569 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="ceilometer-central-agent"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359584 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="ceilometer-central-agent"
Nov 24 09:13:04 crc kubenswrapper[4719]: E1124 09:13:04.359599 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="proxy-httpd"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359605 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="proxy-httpd"
Nov 24 09:13:04 crc kubenswrapper[4719]: E1124 09:13:04.359619 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="sg-core"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359626 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="sg-core"
Nov 24 09:13:04 crc kubenswrapper[4719]: E1124 09:13:04.359676 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="ceilometer-notification-agent"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359682 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="ceilometer-notification-agent"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359845 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="sg-core"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359857 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="ceilometer-notification-agent"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359877 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="ceilometer-central-agent"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.359889 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" containerName="proxy-httpd"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.361291 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.368435 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.369562 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.369709 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.389405 4719 scope.go:117] "RemoveContainer" containerID="43ccf5524d7e95ba0b88dae843157210ed1de8516a42861ac9f404ca2e92913c"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.481300 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-log-httpd\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.481563 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.481719 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-run-httpd\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.482018 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4swq6\" (UniqueName: \"kubernetes.io/projected/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-kube-api-access-4swq6\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.482288 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.482331 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-config-data\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0"
Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.482390 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-scripts\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-scripts\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.534742 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5823e59-5e21-4c4a-86e8-56c6cbf682c5" path="/var/lib/kubelet/pods/c5823e59-5e21-4c4a-86e8-56c6cbf682c5/volumes" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.562361 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.562624 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.583880 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-log-httpd\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.583948 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.584009 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-run-httpd\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.584054 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4swq6\" (UniqueName: \"kubernetes.io/projected/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-kube-api-access-4swq6\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.584070 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.584087 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-config-data\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.584108 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-scripts\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.584460 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-log-httpd\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.585001 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-run-httpd\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.588565 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.589763 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-config-data\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.591172 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-scripts\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.591484 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.601324 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4swq6\" (UniqueName: \"kubernetes.io/projected/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-kube-api-access-4swq6\") pod \"ceilometer-0\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " pod="openstack/ceilometer-0" Nov 24 09:13:04 crc kubenswrapper[4719]: I1124 09:13:04.692919 4719 util.go:30] "No sandbox for pod can be found. 
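Interleaved with the ceilometer-0 remount above, the patch_prober/prober records report a failing liveness probe for openshift-machine-config-operator/machine-config-daemon-hnkb6: an HTTP GET against http://127.0.0.1:8798/health was refused. A sketch of a probe with that shape follows; only the host, port, and path come from the log, and the period and threshold values are assumptions.

    // Sketch only: a liveness probe of the shape reported in the prober records.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1",            // from the log
                    Path: "/health",              // from the log
                    Port: intstr.FromInt32(8798), // from the log
                },
            },
            PeriodSeconds:    10, // assumption: not visible in the log
            FailureThreshold: 3,  // assumption: not visible in the log
        }
        fmt.Printf("%+v\n", probe)
    }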
Nov 24 09:13:05 crc kubenswrapper[4719]: W1124 09:13:05.189473 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2410b12_1a4f_4e4f_afdf_0eb4f2c0025f.slice/crio-e8e4765dde27ef81b708bc21efa7d53b97a63ceb9ff85839278db16dadb941ca WatchSource:0}: Error finding container e8e4765dde27ef81b708bc21efa7d53b97a63ceb9ff85839278db16dadb941ca: Status 404 returned error can't find the container with id e8e4765dde27ef81b708bc21efa7d53b97a63ceb9ff85839278db16dadb941ca
Nov 24 09:13:05 crc kubenswrapper[4719]: I1124 09:13:05.190588 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:13:05 crc kubenswrapper[4719]: I1124 09:13:05.291750 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerStarted","Data":"e8e4765dde27ef81b708bc21efa7d53b97a63ceb9ff85839278db16dadb941ca"}
Nov 24 09:13:06 crc kubenswrapper[4719]: I1124 09:13:06.305434 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerStarted","Data":"ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45"}
Nov 24 09:13:07 crc kubenswrapper[4719]: I1124 09:13:07.314808 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerStarted","Data":"632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307"}
Nov 24 09:13:08 crc kubenswrapper[4719]: I1124 09:13:08.328443 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerStarted","Data":"48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730"}
Nov 24 09:13:10 crc kubenswrapper[4719]: I1124 09:13:10.354648 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerStarted","Data":"d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25"}
Nov 24 09:13:10 crc kubenswrapper[4719]: I1124 09:13:10.355165 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 09:13:10 crc kubenswrapper[4719]: I1124 09:13:10.385016 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.254915157 podStartE2EDuration="6.384984122s" podCreationTimestamp="2025-11-24 09:13:04 +0000 UTC" firstStartedPulling="2025-11-24 09:13:05.191839336 +0000 UTC m=+1161.523112588" lastFinishedPulling="2025-11-24 09:13:09.321908301 +0000 UTC m=+1165.653181553" observedRunningTime="2025-11-24 09:13:10.37868075 +0000 UTC m=+1166.709954072" watchObservedRunningTime="2025-11-24 09:13:10.384984122 +0000 UTC m=+1166.716257414"
Nov 24 09:13:17 crc kubenswrapper[4719]: I1124 09:13:17.408922 4719 generic.go:334] "Generic (PLEG): container finished" podID="d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" containerID="eb3c20c894ab71f62034c6abc2ff661dfc401547e52546f4f66de536b992f090" exitCode=0
Nov 24 09:13:17 crc kubenswrapper[4719]: I1124 09:13:17.408998 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xxwbr" event={"ID":"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30","Type":"ContainerDied","Data":"eb3c20c894ab71f62034c6abc2ff661dfc401547e52546f4f66de536b992f090"}
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.781490 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.837803 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-config-data\") pod \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") "
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.838960 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-scripts\") pod \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") "
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.839069 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-combined-ca-bundle\") pod \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") "
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.839156 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b95kt\" (UniqueName: \"kubernetes.io/projected/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-kube-api-access-b95kt\") pod \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\" (UID: \"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30\") "
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.845121 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-kube-api-access-b95kt" (OuterVolumeSpecName: "kube-api-access-b95kt") pod "d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" (UID: "d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30"). InnerVolumeSpecName "kube-api-access-b95kt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.851577 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-scripts" (OuterVolumeSpecName: "scripts") pod "d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" (UID: "d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.865117 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" (UID: "d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.879851 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-config-data" (OuterVolumeSpecName: "config-data") pod "d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" (UID: "d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.942135 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.942170 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.942179 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:18 crc kubenswrapper[4719]: I1124 09:13:18.942191 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b95kt\" (UniqueName: \"kubernetes.io/projected/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30-kube-api-access-b95kt\") on node \"crc\" DevicePath \"\""
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.430367 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xxwbr" event={"ID":"d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30","Type":"ContainerDied","Data":"21047c1dee26da477d089cbbd614bfb5ca912fdaa400a96674b35e612efcfd83"}
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.430407 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21047c1dee26da477d089cbbd614bfb5ca912fdaa400a96674b35e612efcfd83"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.430467 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xxwbr"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.590081 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 09:13:19 crc kubenswrapper[4719]: E1124 09:13:19.591455 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" containerName="nova-cell0-conductor-db-sync"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.591540 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" containerName="nova-cell0-conductor-db-sync"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.602045 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" containerName="nova-cell0-conductor-db-sync"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.603016 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.603225 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.615712 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.619159 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5c9tl"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.671298 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb7c808-2485-4aba-acd2-2b509f4ed607-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.671357 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb7c808-2485-4aba-acd2-2b509f4ed607-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.671479 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvbj\" (UniqueName: \"kubernetes.io/projected/7bb7c808-2485-4aba-acd2-2b509f4ed607-kube-api-access-2nvbj\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.773241 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb7c808-2485-4aba-acd2-2b509f4ed607-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.773295 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb7c808-2485-4aba-acd2-2b509f4ed607-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.773390 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nvbj\" (UniqueName: \"kubernetes.io/projected/7bb7c808-2485-4aba-acd2-2b509f4ed607-kube-api-access-2nvbj\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.776786 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb7c808-2485-4aba-acd2-2b509f4ed607-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.779776 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb7c808-2485-4aba-acd2-2b509f4ed607-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.800234 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nvbj\" (UniqueName: \"kubernetes.io/projected/7bb7c808-2485-4aba-acd2-2b509f4ed607-kube-api-access-2nvbj\") pod \"nova-cell0-conductor-0\" (UID: \"7bb7c808-2485-4aba-acd2-2b509f4ed607\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:19 crc kubenswrapper[4719]: I1124 09:13:19.934135 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:20 crc kubenswrapper[4719]: I1124 09:13:20.358975 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 09:13:20 crc kubenswrapper[4719]: W1124 09:13:20.364545 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bb7c808_2485_4aba_acd2_2b509f4ed607.slice/crio-103b1fce6ad21014751453a417b8e63629a016eda5f1f65e7303f1e3e651eebb WatchSource:0}: Error finding container 103b1fce6ad21014751453a417b8e63629a016eda5f1f65e7303f1e3e651eebb: Status 404 returned error can't find the container with id 103b1fce6ad21014751453a417b8e63629a016eda5f1f65e7303f1e3e651eebb
Nov 24 09:13:20 crc kubenswrapper[4719]: I1124 09:13:20.441693 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7bb7c808-2485-4aba-acd2-2b509f4ed607","Type":"ContainerStarted","Data":"103b1fce6ad21014751453a417b8e63629a016eda5f1f65e7303f1e3e651eebb"}
Nov 24 09:13:21 crc kubenswrapper[4719]: I1124 09:13:21.452175 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7bb7c808-2485-4aba-acd2-2b509f4ed607","Type":"ContainerStarted","Data":"71f896349a1fbb3d944dafbd08b03910d32ef70e2af6b50e293f49ca5612cfbc"}
Nov 24 09:13:21 crc kubenswrapper[4719]: I1124 09:13:21.452457 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:21 crc kubenswrapper[4719]: I1124 09:13:21.474729 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.474710087 podStartE2EDuration="2.474710087s" podCreationTimestamp="2025-11-24 09:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:13:21.468812867 +0000 UTC m=+1177.800086139" watchObservedRunningTime="2025-11-24 09:13:21.474710087 +0000 UTC m=+1177.805983339"
Nov 24 09:13:29 crc kubenswrapper[4719]: I1124 09:13:29.966928 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.467192 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4vlpd"]
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.468783 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.471781 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.472000 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.479816 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4vlpd"]
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.565919 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.566076 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxx5b\" (UniqueName: \"kubernetes.io/projected/81328d33-7af2-4d4b-9f81-033c996a7d36-kube-api-access-hxx5b\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.566139 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-config-data\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.566191 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-scripts\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.628125 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.635748 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.637708 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.662094 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.668590 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-config-data\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.668671 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.668695 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-scripts\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.668729 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.668810 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7x26\" (UniqueName: \"kubernetes.io/projected/40976df7-3ef1-4b17-96d1-d259648b046c-kube-api-access-w7x26\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.668965 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxx5b\" (UniqueName: \"kubernetes.io/projected/81328d33-7af2-4d4b-9f81-033c996a7d36-kube-api-access-hxx5b\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.669010 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-config-data\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.676205 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-scripts\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.698938 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.699782 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-config-data\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.740398 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.742246 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.745335 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.754700 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxx5b\" (UniqueName: \"kubernetes.io/projected/81328d33-7af2-4d4b-9f81-033c996a7d36-kube-api-access-hxx5b\") pod \"nova-cell0-cell-mapping-4vlpd\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.772985 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.773258 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7x26\" (UniqueName: \"kubernetes.io/projected/40976df7-3ef1-4b17-96d1-d259648b046c-kube-api-access-w7x26\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.773369 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-config-data\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.779576 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.792095 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.799281 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4vlpd"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.806285 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7x26\" (UniqueName: \"kubernetes.io/projected/40976df7-3ef1-4b17-96d1-d259648b046c-kube-api-access-w7x26\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.821224 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-config-data\") pod \"nova-scheduler-0\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " pod="openstack/nova-scheduler-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.862177 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.869061 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.874906 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.875282 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-config-data\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.876242 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9n2f\" (UniqueName: \"kubernetes.io/projected/91c47491-6b0b-466c-adb6-90f4a4ca9728-kube-api-access-p9n2f\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.876440 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.876644 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91c47491-6b0b-466c-adb6-90f4a4ca9728-logs\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0"
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.913403 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.951521 4719 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.983396 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9n2f\" (UniqueName: \"kubernetes.io/projected/91c47491-6b0b-466c-adb6-90f4a4ca9728-kube-api-access-p9n2f\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.983467 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.983531 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91c47491-6b0b-466c-adb6-90f4a4ca9728-logs\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.983564 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xk9r\" (UniqueName: \"kubernetes.io/projected/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-kube-api-access-9xk9r\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.983586 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-config-data\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.983618 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.983636 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-logs\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.983653 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-config-data\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0" Nov 24 09:13:30 crc kubenswrapper[4719]: I1124 09:13:30.985983 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91c47491-6b0b-466c-adb6-90f4a4ca9728-logs\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.000897 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.002244 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-config-data\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.007370 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-f6wxl"] Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.008779 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.044309 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9n2f\" (UniqueName: \"kubernetes.io/projected/91c47491-6b0b-466c-adb6-90f4a4ca9728-kube-api-access-p9n2f\") pod \"nova-metadata-0\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " pod="openstack/nova-metadata-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.060063 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-f6wxl"] Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.084746 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.085258 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.085322 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.085402 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-config\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.085461 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xk9r\" (UniqueName: \"kubernetes.io/projected/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-kube-api-access-9xk9r\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.085482 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc 
kubenswrapper[4719]: I1124 09:13:31.085503 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-config-data\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.085540 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.085558 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-logs\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.085578 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52sb\" (UniqueName: \"kubernetes.io/projected/6c4416ac-0d2d-4fce-a5cf-51baceca7650-kube-api-access-f52sb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.092072 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-logs\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.098927 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.108729 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-config-data\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.109095 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.121189 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.124835 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.142738 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xk9r\" (UniqueName: \"kubernetes.io/projected/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-kube-api-access-9xk9r\") pod \"nova-api-0\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.193183 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-config\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.193455 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.193567 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8m2\" (UniqueName: \"kubernetes.io/projected/16402f86-bc6a-4127-8e64-e9eb25435527-kube-api-access-kc8m2\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.193663 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.193760 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52sb\" (UniqueName: \"kubernetes.io/projected/6c4416ac-0d2d-4fce-a5cf-51baceca7650-kube-api-access-f52sb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.193860 4719 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.193979 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.194107 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.195086 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-config\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.195286 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.195852 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.196170 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.226185 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.226851 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52sb\" (UniqueName: \"kubernetes.io/projected/6c4416ac-0d2d-4fce-a5cf-51baceca7650-kube-api-access-f52sb\") pod \"dnsmasq-dns-8b8cf6657-f6wxl\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.253740 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.296758 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.296886 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8m2\" (UniqueName: \"kubernetes.io/projected/16402f86-bc6a-4127-8e64-e9eb25435527-kube-api-access-kc8m2\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.296922 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.305558 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.305636 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.326020 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8m2\" (UniqueName: \"kubernetes.io/projected/16402f86-bc6a-4127-8e64-e9eb25435527-kube-api-access-kc8m2\") pod \"nova-cell1-novncproxy-0\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.380313 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:31 crc kubenswrapper[4719]: I1124 09:13:31.463891 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.005615 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.058287 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4vlpd"] Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.157960 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:13:32 crc kubenswrapper[4719]: W1124 09:13:32.160875 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40976df7_3ef1_4b17_96d1_d259648b046c.slice/crio-d244215fccc3c0c98861d61566ec3e2cb757c731969481453d6895f1d4b3f321 WatchSource:0}: Error finding container d244215fccc3c0c98861d61566ec3e2cb757c731969481453d6895f1d4b3f321: Status 404 returned error can't find the container with id d244215fccc3c0c98861d61566ec3e2cb757c731969481453d6895f1d4b3f321 Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.238902 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.341764 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.374902 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-f6wxl"] Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.414544 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8mh9"] Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.415701 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.420663 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.421007 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.424794 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8mh9"] Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.536890 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.536953 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-scripts\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.537304 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlqd\" (UniqueName: \"kubernetes.io/projected/28468fda-a274-493f-8a27-3aa221c5c8db-kube-api-access-lwlqd\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.537330 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-config-data\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.561941 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40976df7-3ef1-4b17-96d1-d259648b046c","Type":"ContainerStarted","Data":"d244215fccc3c0c98861d61566ec3e2cb757c731969481453d6895f1d4b3f321"} Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.562642 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" event={"ID":"6c4416ac-0d2d-4fce-a5cf-51baceca7650","Type":"ContainerStarted","Data":"09da56720568784089ce0a9e2c9ee4eb5f1197acea99292fff6674f0cdffe73e"} Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.564986 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4vlpd" event={"ID":"81328d33-7af2-4d4b-9f81-033c996a7d36","Type":"ContainerStarted","Data":"aa985b55b297acae8a118bf6107d9e386b0a250b74e57b331d34f6d884080499"} Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.565022 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4vlpd" event={"ID":"81328d33-7af2-4d4b-9f81-033c996a7d36","Type":"ContainerStarted","Data":"ae28d02ff711dfdc4ac3f230e1099ee6e2363895e53f9013b56caf70bb971979"} Nov 24 09:13:32 crc 
kubenswrapper[4719]: I1124 09:13:32.569935 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16402f86-bc6a-4127-8e64-e9eb25435527","Type":"ContainerStarted","Data":"38646ebbb9bf4318010fe4d64079739c44cea8a53d852b5ec6375db67aabc36c"} Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.577632 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f","Type":"ContainerStarted","Data":"abe0a9a5fbc8c9145449d1b8b6e9dd0d9149ed49cee6218a22af0c80cb190d3d"} Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.585811 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91c47491-6b0b-466c-adb6-90f4a4ca9728","Type":"ContainerStarted","Data":"a8336da7f5ef5700f80660ac954279eff470c90675036215a77ef1e7ca84377e"} Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.598537 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4vlpd" podStartSLOduration=2.598514755 podStartE2EDuration="2.598514755s" podCreationTimestamp="2025-11-24 09:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:13:32.581858335 +0000 UTC m=+1188.913131587" watchObservedRunningTime="2025-11-24 09:13:32.598514755 +0000 UTC m=+1188.929788017" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.639093 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-scripts\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.639474 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlqd\" (UniqueName: \"kubernetes.io/projected/28468fda-a274-493f-8a27-3aa221c5c8db-kube-api-access-lwlqd\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.639496 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-config-data\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.639563 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.651056 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-scripts\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.652063 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.657876 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-config-data\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.662795 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlqd\" (UniqueName: \"kubernetes.io/projected/28468fda-a274-493f-8a27-3aa221c5c8db-kube-api-access-lwlqd\") pod \"nova-cell1-conductor-db-sync-x8mh9\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: I1124 09:13:32.741753 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:32 crc kubenswrapper[4719]: E1124 09:13:32.760887 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c4416ac_0d2d_4fce_a5cf_51baceca7650.slice/crio-conmon-4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:13:33 crc kubenswrapper[4719]: W1124 09:13:33.301515 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28468fda_a274_493f_8a27_3aa221c5c8db.slice/crio-1f2cb26c98b5b2c62df52bf5a1254634b3145573f3b98e09b34b41fad130bdfd WatchSource:0}: Error finding container 1f2cb26c98b5b2c62df52bf5a1254634b3145573f3b98e09b34b41fad130bdfd: Status 404 returned error can't find the container with id 1f2cb26c98b5b2c62df52bf5a1254634b3145573f3b98e09b34b41fad130bdfd Nov 24 09:13:33 crc kubenswrapper[4719]: I1124 09:13:33.311885 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8mh9"] Nov 24 09:13:33 crc kubenswrapper[4719]: I1124 09:13:33.622736 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" event={"ID":"28468fda-a274-493f-8a27-3aa221c5c8db","Type":"ContainerStarted","Data":"c22ed4ad0ba88baa4c6dd82e8fb8c82fda65ce23848f984ff2c624d6ec0cf5d5"} Nov 24 09:13:33 crc kubenswrapper[4719]: I1124 09:13:33.622803 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" event={"ID":"28468fda-a274-493f-8a27-3aa221c5c8db","Type":"ContainerStarted","Data":"1f2cb26c98b5b2c62df52bf5a1254634b3145573f3b98e09b34b41fad130bdfd"} Nov 24 09:13:33 crc kubenswrapper[4719]: I1124 09:13:33.627024 4719 generic.go:334] "Generic (PLEG): container finished" podID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" containerID="4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997" exitCode=0 Nov 24 09:13:33 crc kubenswrapper[4719]: I1124 09:13:33.627149 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" 
event={"ID":"6c4416ac-0d2d-4fce-a5cf-51baceca7650","Type":"ContainerDied","Data":"4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997"} Nov 24 09:13:33 crc kubenswrapper[4719]: I1124 09:13:33.730810 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" podStartSLOduration=1.730791551 podStartE2EDuration="1.730791551s" podCreationTimestamp="2025-11-24 09:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:13:33.666297953 +0000 UTC m=+1189.997571205" watchObservedRunningTime="2025-11-24 09:13:33.730791551 +0000 UTC m=+1190.062064803" Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.562079 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.562381 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.562432 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.563194 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abd7ce8489d65ccef4f15a6a456d72d66be28ce94d53032a08cda3487cfa7499"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.563244 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://abd7ce8489d65ccef4f15a6a456d72d66be28ce94d53032a08cda3487cfa7499" gracePeriod=600 Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.643875 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" event={"ID":"6c4416ac-0d2d-4fce-a5cf-51baceca7650","Type":"ContainerStarted","Data":"e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230"} Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.643947 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.677851 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" podStartSLOduration=4.67782998 podStartE2EDuration="4.67782998s" podCreationTimestamp="2025-11-24 09:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:13:34.663895209 +0000 UTC m=+1190.995168481" watchObservedRunningTime="2025-11-24 09:13:34.67782998 +0000 UTC 
m=+1191.009103232" Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.722517 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 09:13:34 crc kubenswrapper[4719]: I1124 09:13:34.979708 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:13:35 crc kubenswrapper[4719]: I1124 09:13:35.003174 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:13:35 crc kubenswrapper[4719]: I1124 09:13:35.655986 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="abd7ce8489d65ccef4f15a6a456d72d66be28ce94d53032a08cda3487cfa7499" exitCode=0 Nov 24 09:13:35 crc kubenswrapper[4719]: I1124 09:13:35.656172 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"abd7ce8489d65ccef4f15a6a456d72d66be28ce94d53032a08cda3487cfa7499"} Nov 24 09:13:35 crc kubenswrapper[4719]: I1124 09:13:35.657612 4719 scope.go:117] "RemoveContainer" containerID="c4aeeb69c1ab7122cad95da513920656c5e4ba5b3dd78419e124282e98483b06" Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.686110 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91c47491-6b0b-466c-adb6-90f4a4ca9728","Type":"ContainerStarted","Data":"43fea711240764429e6a9ab28d7fd1e0e45e9905e5cf1936db6b6da1e2276717"} Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.686606 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91c47491-6b0b-466c-adb6-90f4a4ca9728","Type":"ContainerStarted","Data":"20fa245275e86ed49762075c7b334308c43a4732d262cd900534d4cd9c19a39f"} Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.686432 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerName="nova-metadata-metadata" containerID="cri-o://43fea711240764429e6a9ab28d7fd1e0e45e9905e5cf1936db6b6da1e2276717" gracePeriod=30 Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.686150 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerName="nova-metadata-log" containerID="cri-o://20fa245275e86ed49762075c7b334308c43a4732d262cd900534d4cd9c19a39f" gracePeriod=30 Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.690840 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"b04d639d9aa1ad87769535c446009de2717540d226e0b11055a32fbdd9893eb6"} Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.695890 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40976df7-3ef1-4b17-96d1-d259648b046c","Type":"ContainerStarted","Data":"de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38"} Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.697855 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16402f86-bc6a-4127-8e64-e9eb25435527","Type":"ContainerStarted","Data":"a0f9aa744fb58ae041a6606547c8f734361f5a55553664878402c58e447a82d5"} Nov 24 09:13:37 crc 
kubenswrapper[4719]: I1124 09:13:37.697967 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="16402f86-bc6a-4127-8e64-e9eb25435527" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a0f9aa744fb58ae041a6606547c8f734361f5a55553664878402c58e447a82d5" gracePeriod=30 Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.703023 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f","Type":"ContainerStarted","Data":"304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5"} Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.703077 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f","Type":"ContainerStarted","Data":"1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf"} Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.734133 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.429534973 podStartE2EDuration="7.734115169s" podCreationTimestamp="2025-11-24 09:13:30 +0000 UTC" firstStartedPulling="2025-11-24 09:13:32.16405131 +0000 UTC m=+1188.495324562" lastFinishedPulling="2025-11-24 09:13:36.468631506 +0000 UTC m=+1192.799904758" observedRunningTime="2025-11-24 09:13:37.732723999 +0000 UTC m=+1194.063997251" watchObservedRunningTime="2025-11-24 09:13:37.734115169 +0000 UTC m=+1194.065388421" Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.734877 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.806353962 podStartE2EDuration="7.734871761s" podCreationTimestamp="2025-11-24 09:13:30 +0000 UTC" firstStartedPulling="2025-11-24 09:13:32.040208432 +0000 UTC m=+1188.371481694" lastFinishedPulling="2025-11-24 09:13:36.968726241 +0000 UTC m=+1193.299999493" observedRunningTime="2025-11-24 09:13:37.715425741 +0000 UTC m=+1194.046698993" watchObservedRunningTime="2025-11-24 09:13:37.734871761 +0000 UTC m=+1194.066145013" Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.753920 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.524980053 podStartE2EDuration="7.753899189s" podCreationTimestamp="2025-11-24 09:13:30 +0000 UTC" firstStartedPulling="2025-11-24 09:13:32.355051012 +0000 UTC m=+1188.686324264" lastFinishedPulling="2025-11-24 09:13:36.583970148 +0000 UTC m=+1192.915243400" observedRunningTime="2025-11-24 09:13:37.750340896 +0000 UTC m=+1194.081614158" watchObservedRunningTime="2025-11-24 09:13:37.753899189 +0000 UTC m=+1194.085172451" Nov 24 09:13:37 crc kubenswrapper[4719]: I1124 09:13:37.787109 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.108854576 podStartE2EDuration="7.787093505s" podCreationTimestamp="2025-11-24 09:13:30 +0000 UTC" firstStartedPulling="2025-11-24 09:13:32.271604338 +0000 UTC m=+1188.602877590" lastFinishedPulling="2025-11-24 09:13:36.949843267 +0000 UTC m=+1193.281116519" observedRunningTime="2025-11-24 09:13:37.784811439 +0000 UTC m=+1194.116084701" watchObservedRunningTime="2025-11-24 09:13:37.787093505 +0000 UTC m=+1194.118366757" Nov 24 09:13:38 crc kubenswrapper[4719]: I1124 09:13:38.713609 4719 generic.go:334] "Generic (PLEG): container finished" 
podID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerID="20fa245275e86ed49762075c7b334308c43a4732d262cd900534d4cd9c19a39f" exitCode=143 Nov 24 09:13:38 crc kubenswrapper[4719]: I1124 09:13:38.715777 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91c47491-6b0b-466c-adb6-90f4a4ca9728","Type":"ContainerDied","Data":"20fa245275e86ed49762075c7b334308c43a4732d262cd900534d4cd9c19a39f"} Nov 24 09:13:38 crc kubenswrapper[4719]: I1124 09:13:38.820006 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:13:38 crc kubenswrapper[4719]: I1124 09:13:38.820264 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bbdd37f7-5b28-4ecb-96ad-b2c7986016e4" containerName="kube-state-metrics" containerID="cri-o://8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd" gracePeriod=30 Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.409307 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.480225 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm2qj\" (UniqueName: \"kubernetes.io/projected/bbdd37f7-5b28-4ecb-96ad-b2c7986016e4-kube-api-access-dm2qj\") pod \"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4\" (UID: \"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4\") " Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.502436 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdd37f7-5b28-4ecb-96ad-b2c7986016e4-kube-api-access-dm2qj" (OuterVolumeSpecName: "kube-api-access-dm2qj") pod "bbdd37f7-5b28-4ecb-96ad-b2c7986016e4" (UID: "bbdd37f7-5b28-4ecb-96ad-b2c7986016e4"). InnerVolumeSpecName "kube-api-access-dm2qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.583076 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm2qj\" (UniqueName: \"kubernetes.io/projected/bbdd37f7-5b28-4ecb-96ad-b2c7986016e4-kube-api-access-dm2qj\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.723863 4719 generic.go:334] "Generic (PLEG): container finished" podID="bbdd37f7-5b28-4ecb-96ad-b2c7986016e4" containerID="8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd" exitCode=2 Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.724015 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4","Type":"ContainerDied","Data":"8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd"} Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.724193 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bbdd37f7-5b28-4ecb-96ad-b2c7986016e4","Type":"ContainerDied","Data":"7f44dbde35840cd77b4c881412f26908255e3218583fed8e1c6d2dd0c89853e2"} Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.724219 4719 scope.go:117] "RemoveContainer" containerID="8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.724129 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.777048 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.778482 4719 scope.go:117] "RemoveContainer" containerID="8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd" Nov 24 09:13:39 crc kubenswrapper[4719]: E1124 09:13:39.779017 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd\": container with ID starting with 8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd not found: ID does not exist" containerID="8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.779083 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd"} err="failed to get container status \"8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd\": rpc error: code = NotFound desc = could not find container \"8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd\": container with ID starting with 8ccffb921f05247a14c20f8932c7bdebe8060884c1729911ee8f402025c1d9dd not found: ID does not exist" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.786543 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.798341 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:13:39 crc kubenswrapper[4719]: E1124 09:13:39.798733 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdd37f7-5b28-4ecb-96ad-b2c7986016e4" containerName="kube-state-metrics" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.798748 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdd37f7-5b28-4ecb-96ad-b2c7986016e4" containerName="kube-state-metrics" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.802676 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdd37f7-5b28-4ecb-96ad-b2c7986016e4" containerName="kube-state-metrics" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.813401 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.819141 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.825618 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.825892 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.888002 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.888178 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prcjs\" (UniqueName: \"kubernetes.io/projected/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-api-access-prcjs\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.888235 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.888284 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.989826 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.989898 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prcjs\" (UniqueName: \"kubernetes.io/projected/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-api-access-prcjs\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.989952 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.990008 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.997997 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:39 crc kubenswrapper[4719]: I1124 09:13:39.999020 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.007052 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.008465 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prcjs\" (UniqueName: \"kubernetes.io/projected/cc7de5f2-3f27-47e7-a08e-f3b13211531a-kube-api-access-prcjs\") pod \"kube-state-metrics-0\" (UID: \"cc7de5f2-3f27-47e7-a08e-f3b13211531a\") " pod="openstack/kube-state-metrics-0" Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.154452 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.534933 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdd37f7-5b28-4ecb-96ad-b2c7986016e4" path="/var/lib/kubelet/pods/bbdd37f7-5b28-4ecb-96ad-b2c7986016e4/volumes" Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.593966 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.735928 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc7de5f2-3f27-47e7-a08e-f3b13211531a","Type":"ContainerStarted","Data":"a3f789369fb1b9ddcbded6d701d7980311c3450cbaa7d574c405dee38f873ef6"} Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.858663 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.858930 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="ceilometer-central-agent" containerID="cri-o://ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45" gracePeriod=30 Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.859404 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="proxy-httpd" containerID="cri-o://d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25" gracePeriod=30 Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.859483 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="sg-core" containerID="cri-o://48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730" gracePeriod=30 Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.859519 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="ceilometer-notification-agent" containerID="cri-o://632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307" gracePeriod=30 Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.952199 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.952518 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 09:13:40 crc kubenswrapper[4719]: I1124 09:13:40.989934 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.227654 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.227703 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.255352 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.255400 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.384232 
4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.489678 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-zv6tm"] Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.490210 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" podUID="9df5868d-b22b-4226-831b-cf19140e059c" containerName="dnsmasq-dns" containerID="cri-o://9059a7e5933968d7b5409caf693ec3d3e5d789a1ab080816e98c45ea25e0807d" gracePeriod=10 Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.496218 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.753495 4719 generic.go:334] "Generic (PLEG): container finished" podID="9df5868d-b22b-4226-831b-cf19140e059c" containerID="9059a7e5933968d7b5409caf693ec3d3e5d789a1ab080816e98c45ea25e0807d" exitCode=0 Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.753587 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" event={"ID":"9df5868d-b22b-4226-831b-cf19140e059c","Type":"ContainerDied","Data":"9059a7e5933968d7b5409caf693ec3d3e5d789a1ab080816e98c45ea25e0807d"} Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.760440 4719 generic.go:334] "Generic (PLEG): container finished" podID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerID="d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25" exitCode=0 Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.760478 4719 generic.go:334] "Generic (PLEG): container finished" podID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerID="48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730" exitCode=2 Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.760485 4719 generic.go:334] "Generic (PLEG): container finished" podID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerID="ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45" exitCode=0 Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.760511 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerDied","Data":"d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25"} Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.760579 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerDied","Data":"48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730"} Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.760595 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerDied","Data":"ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45"} Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.762495 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc7de5f2-3f27-47e7-a08e-f3b13211531a","Type":"ContainerStarted","Data":"e820e27d1341c96f569f709f4a697627c974a29a4bd9df3a241f16d5f58cdf95"} Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.763298 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 09:13:41 crc kubenswrapper[4719]: 
I1124 09:13:41.792359 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.375935182 podStartE2EDuration="2.792337347s" podCreationTimestamp="2025-11-24 09:13:39 +0000 UTC" firstStartedPulling="2025-11-24 09:13:40.61901844 +0000 UTC m=+1196.950291692" lastFinishedPulling="2025-11-24 09:13:41.035420605 +0000 UTC m=+1197.366693857" observedRunningTime="2025-11-24 09:13:41.782286768 +0000 UTC m=+1198.113560030" watchObservedRunningTime="2025-11-24 09:13:41.792337347 +0000 UTC m=+1198.123610599" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.808918 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 09:13:41 crc kubenswrapper[4719]: I1124 09:13:41.987700 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.046826 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-dns-svc\") pod \"9df5868d-b22b-4226-831b-cf19140e059c\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.046963 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-config\") pod \"9df5868d-b22b-4226-831b-cf19140e059c\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.047065 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-sb\") pod \"9df5868d-b22b-4226-831b-cf19140e059c\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.047146 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-nb\") pod \"9df5868d-b22b-4226-831b-cf19140e059c\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.047213 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t7tl\" (UniqueName: \"kubernetes.io/projected/9df5868d-b22b-4226-831b-cf19140e059c-kube-api-access-7t7tl\") pod \"9df5868d-b22b-4226-831b-cf19140e059c\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.055271 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9df5868d-b22b-4226-831b-cf19140e059c-kube-api-access-7t7tl" (OuterVolumeSpecName: "kube-api-access-7t7tl") pod "9df5868d-b22b-4226-831b-cf19140e059c" (UID: "9df5868d-b22b-4226-831b-cf19140e059c"). InnerVolumeSpecName "kube-api-access-7t7tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.147944 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9df5868d-b22b-4226-831b-cf19140e059c" (UID: "9df5868d-b22b-4226-831b-cf19140e059c"). InnerVolumeSpecName "ovsdbserver-nb". 
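
In the pod_startup_latency_tracker line above, podStartSLOduration (2.375935182s) is podStartE2EDuration (2.792337347s) minus the image-pull window, i.e. lastFinishedPulling (09:13:41.035420605) minus firstStartedPulling (09:13:40.619018440), which is 0.416402165s. A quick Go check of that arithmetic, using the values copied from the log line:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the log line for openstack/kube-state-metrics-0.
        e2e := 2792337347 * time.Nanosecond // podStartE2EDuration="2.792337347s"
        firstStartedPulling := time.Date(2025, 11, 24, 9, 13, 40, 619018440, time.UTC)
        lastFinishedPulling := time.Date(2025, 11, 24, 9, 13, 41, 35420605, time.UTC)

        pull := lastFinishedPulling.Sub(firstStartedPulling) // 416.402165ms
        fmt.Println("podStartSLOduration =", e2e-pull)       // 2.375935182s
    }
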
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.148299 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-nb\") pod \"9df5868d-b22b-4226-831b-cf19140e059c\" (UID: \"9df5868d-b22b-4226-831b-cf19140e059c\") " Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.149222 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9df5868d-b22b-4226-831b-cf19140e059c" (UID: "9df5868d-b22b-4226-831b-cf19140e059c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.149305 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t7tl\" (UniqueName: \"kubernetes.io/projected/9df5868d-b22b-4226-831b-cf19140e059c-kube-api-access-7t7tl\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:42 crc kubenswrapper[4719]: W1124 09:13:42.149400 4719 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9df5868d-b22b-4226-831b-cf19140e059c/volumes/kubernetes.io~configmap/ovsdbserver-nb Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.149421 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9df5868d-b22b-4226-831b-cf19140e059c" (UID: "9df5868d-b22b-4226-831b-cf19140e059c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.150726 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-config" (OuterVolumeSpecName: "config") pod "9df5868d-b22b-4226-831b-cf19140e059c" (UID: "9df5868d-b22b-4226-831b-cf19140e059c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.205575 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9df5868d-b22b-4226-831b-cf19140e059c" (UID: "9df5868d-b22b-4226-831b-cf19140e059c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.251109 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.251146 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.251160 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.251172 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9df5868d-b22b-4226-831b-cf19140e059c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.338217 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.171:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.338261 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.171:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.772421 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.772927 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-zv6tm" event={"ID":"9df5868d-b22b-4226-831b-cf19140e059c","Type":"ContainerDied","Data":"f7066d20f40b6e7f2b73efe118db988712cd62c1ddcf6fae9316c4630b8ec9f9"} Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.772967 4719 scope.go:117] "RemoveContainer" containerID="9059a7e5933968d7b5409caf693ec3d3e5d789a1ab080816e98c45ea25e0807d" Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.809210 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-zv6tm"] Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.821615 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-zv6tm"] Nov 24 09:13:42 crc kubenswrapper[4719]: I1124 09:13:42.836399 4719 scope.go:117] "RemoveContainer" containerID="f8ff30b58a642f94a8bf6253f175a86f11e7adac55e914fe8172cc17fb7ab59b" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.191416 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.287422 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-combined-ca-bundle\") pod \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.287635 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-run-httpd\") pod \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.287673 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4swq6\" (UniqueName: \"kubernetes.io/projected/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-kube-api-access-4swq6\") pod \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.287702 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-sg-core-conf-yaml\") pod \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.287735 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-log-httpd\") pod \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.287754 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-scripts\") pod \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.287826 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-config-data\") pod \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\" (UID: \"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f\") " Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.288226 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" (UID: "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.288457 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" (UID: "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.297578 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-kube-api-access-4swq6" (OuterVolumeSpecName: "kube-api-access-4swq6") pod "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" (UID: "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f"). InnerVolumeSpecName "kube-api-access-4swq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.312200 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-scripts" (OuterVolumeSpecName: "scripts") pod "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" (UID: "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.394891 4719 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.394927 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4swq6\" (UniqueName: \"kubernetes.io/projected/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-kube-api-access-4swq6\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.394940 4719 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.394951 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.395296 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" (UID: "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.477604 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" (UID: "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.496519 4719 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.496557 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.498872 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-config-data" (OuterVolumeSpecName: "config-data") pod "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" (UID: "a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.537235 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9df5868d-b22b-4226-831b-cf19140e059c" path="/var/lib/kubelet/pods/9df5868d-b22b-4226-831b-cf19140e059c/volumes" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.600121 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.795089 4719 generic.go:334] "Generic (PLEG): container finished" podID="81328d33-7af2-4d4b-9f81-033c996a7d36" containerID="aa985b55b297acae8a118bf6107d9e386b0a250b74e57b331d34f6d884080499" exitCode=0 Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.795160 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4vlpd" event={"ID":"81328d33-7af2-4d4b-9f81-033c996a7d36","Type":"ContainerDied","Data":"aa985b55b297acae8a118bf6107d9e386b0a250b74e57b331d34f6d884080499"} Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.798395 4719 generic.go:334] "Generic (PLEG): container finished" podID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerID="632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307" exitCode=0 Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.798435 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerDied","Data":"632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307"} Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.798462 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f","Type":"ContainerDied","Data":"e8e4765dde27ef81b708bc21efa7d53b97a63ceb9ff85839278db16dadb941ca"} Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.798485 4719 scope.go:117] "RemoveContainer" containerID="d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.798631 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.829418 4719 scope.go:117] "RemoveContainer" containerID="48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.845298 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.861110 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885266 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.885621 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="ceilometer-notification-agent" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885637 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="ceilometer-notification-agent" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.885659 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df5868d-b22b-4226-831b-cf19140e059c" containerName="init" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885665 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df5868d-b22b-4226-831b-cf19140e059c" containerName="init" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.885674 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9df5868d-b22b-4226-831b-cf19140e059c" containerName="dnsmasq-dns" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885683 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9df5868d-b22b-4226-831b-cf19140e059c" containerName="dnsmasq-dns" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.885697 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="proxy-httpd" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885702 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="proxy-httpd" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.885714 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="sg-core" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885719 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="sg-core" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.885732 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="ceilometer-central-agent" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885738 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="ceilometer-central-agent" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885888 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="sg-core" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885906 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9df5868d-b22b-4226-831b-cf19140e059c" containerName="dnsmasq-dns" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885916 4719 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="ceilometer-central-agent" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885924 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="ceilometer-notification-agent" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.885936 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" containerName="proxy-httpd" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.887654 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.889554 4719 scope.go:117] "RemoveContainer" containerID="632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.890346 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.891958 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.893472 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.907941 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.956981 4719 scope.go:117] "RemoveContainer" containerID="ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.988467 4719 scope.go:117] "RemoveContainer" containerID="d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.988858 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25\": container with ID starting with d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25 not found: ID does not exist" containerID="d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.988890 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25"} err="failed to get container status \"d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25\": rpc error: code = NotFound desc = could not find container \"d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25\": container with ID starting with d0d859815d05ed30b4bb6727de2fe84138afc000dadc15f5dd2ce4ef370d1a25 not found: ID does not exist" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.988911 4719 scope.go:117] "RemoveContainer" containerID="48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.989153 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730\": container with ID starting with 48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730 not found: ID does not exist" 
containerID="48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.989174 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730"} err="failed to get container status \"48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730\": rpc error: code = NotFound desc = could not find container \"48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730\": container with ID starting with 48945a296e3177200cedd2747ab3e8a7580fe3c3187f4df4dfb7623e988d9730 not found: ID does not exist" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.989188 4719 scope.go:117] "RemoveContainer" containerID="632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.989404 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307\": container with ID starting with 632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307 not found: ID does not exist" containerID="632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.989423 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307"} err="failed to get container status \"632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307\": rpc error: code = NotFound desc = could not find container \"632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307\": container with ID starting with 632f7ae4368e0a768f48c2590cdf51816bd1b4508aae3b1194e453738ee56307 not found: ID does not exist" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.989457 4719 scope.go:117] "RemoveContainer" containerID="ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45" Nov 24 09:13:44 crc kubenswrapper[4719]: E1124 09:13:44.989708 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45\": container with ID starting with ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45 not found: ID does not exist" containerID="ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45" Nov 24 09:13:44 crc kubenswrapper[4719]: I1124 09:13:44.989733 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45"} err="failed to get container status \"ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45\": rpc error: code = NotFound desc = could not find container \"ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45\": container with ID starting with ddc61b0499fb44990bf207bd19e04108fbe56761fa81f5bfc83556896e702c45 not found: ID does not exist" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.008372 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-run-httpd\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: 
I1124 09:13:45.008445 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-config-data\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.008542 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.008579 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-scripts\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.008617 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.008639 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh5j4\" (UniqueName: \"kubernetes.io/projected/913a8e91-83da-4a4e-8732-1504279e5649-kube-api-access-vh5j4\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.008658 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.008673 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-log-httpd\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.109628 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-scripts\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.109693 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.109714 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh5j4\" (UniqueName: 
\"kubernetes.io/projected/913a8e91-83da-4a4e-8732-1504279e5649-kube-api-access-vh5j4\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.109737 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.109751 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-log-httpd\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.109793 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-run-httpd\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.109816 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-config-data\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.109889 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.113402 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-run-httpd\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.113701 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-log-httpd\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.113985 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.114399 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.115398 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.116905 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-scripts\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.116982 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-config-data\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.136294 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh5j4\" (UniqueName: \"kubernetes.io/projected/913a8e91-83da-4a4e-8732-1504279e5649-kube-api-access-vh5j4\") pod \"ceilometer-0\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") " pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.208137 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.714628 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:13:45 crc kubenswrapper[4719]: I1124 09:13:45.808503 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerStarted","Data":"9711b401f013d032bf038de124fa6763de8d9a184dd80ef2e811de2d7a7a3046"} Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.198076 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4vlpd" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.335573 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-combined-ca-bundle\") pod \"81328d33-7af2-4d4b-9f81-033c996a7d36\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.335805 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxx5b\" (UniqueName: \"kubernetes.io/projected/81328d33-7af2-4d4b-9f81-033c996a7d36-kube-api-access-hxx5b\") pod \"81328d33-7af2-4d4b-9f81-033c996a7d36\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.335836 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-config-data\") pod \"81328d33-7af2-4d4b-9f81-033c996a7d36\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.335994 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-scripts\") pod \"81328d33-7af2-4d4b-9f81-033c996a7d36\" (UID: \"81328d33-7af2-4d4b-9f81-033c996a7d36\") " Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.344934 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81328d33-7af2-4d4b-9f81-033c996a7d36-kube-api-access-hxx5b" (OuterVolumeSpecName: "kube-api-access-hxx5b") pod "81328d33-7af2-4d4b-9f81-033c996a7d36" (UID: "81328d33-7af2-4d4b-9f81-033c996a7d36"). InnerVolumeSpecName "kube-api-access-hxx5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.345025 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-scripts" (OuterVolumeSpecName: "scripts") pod "81328d33-7af2-4d4b-9f81-033c996a7d36" (UID: "81328d33-7af2-4d4b-9f81-033c996a7d36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.402931 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-config-data" (OuterVolumeSpecName: "config-data") pod "81328d33-7af2-4d4b-9f81-033c996a7d36" (UID: "81328d33-7af2-4d4b-9f81-033c996a7d36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.404144 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81328d33-7af2-4d4b-9f81-033c996a7d36" (UID: "81328d33-7af2-4d4b-9f81-033c996a7d36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.440055 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.440091 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.440103 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxx5b\" (UniqueName: \"kubernetes.io/projected/81328d33-7af2-4d4b-9f81-033c996a7d36-kube-api-access-hxx5b\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.440113 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81328d33-7af2-4d4b-9f81-033c996a7d36-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.531531 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f" path="/var/lib/kubelet/pods/a2410b12-1a4f-4e4f-afdf-0eb4f2c0025f/volumes" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.819520 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4vlpd" event={"ID":"81328d33-7af2-4d4b-9f81-033c996a7d36","Type":"ContainerDied","Data":"ae28d02ff711dfdc4ac3f230e1099ee6e2363895e53f9013b56caf70bb971979"} Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.819937 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae28d02ff711dfdc4ac3f230e1099ee6e2363895e53f9013b56caf70bb971979" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.819550 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4vlpd" Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.821195 4719 generic.go:334] "Generic (PLEG): container finished" podID="28468fda-a274-493f-8a27-3aa221c5c8db" containerID="c22ed4ad0ba88baa4c6dd82e8fb8c82fda65ce23848f984ff2c624d6ec0cf5d5" exitCode=0 Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.821257 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" event={"ID":"28468fda-a274-493f-8a27-3aa221c5c8db","Type":"ContainerDied","Data":"c22ed4ad0ba88baa4c6dd82e8fb8c82fda65ce23848f984ff2c624d6ec0cf5d5"} Nov 24 09:13:46 crc kubenswrapper[4719]: I1124 09:13:46.823705 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerStarted","Data":"00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e"} Nov 24 09:13:47 crc kubenswrapper[4719]: I1124 09:13:47.026528 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:13:47 crc kubenswrapper[4719]: I1124 09:13:47.026778 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-log" containerID="cri-o://1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf" gracePeriod=30 Nov 24 09:13:47 crc kubenswrapper[4719]: I1124 09:13:47.026823 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-api" containerID="cri-o://304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5" gracePeriod=30 Nov 24 09:13:47 crc kubenswrapper[4719]: I1124 09:13:47.038917 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:13:47 crc kubenswrapper[4719]: I1124 09:13:47.039126 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="40976df7-3ef1-4b17-96d1-d259648b046c" containerName="nova-scheduler-scheduler" containerID="cri-o://de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38" gracePeriod=30 Nov 24 09:13:47 crc kubenswrapper[4719]: I1124 09:13:47.834334 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerStarted","Data":"3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3"} Nov 24 09:13:47 crc kubenswrapper[4719]: I1124 09:13:47.837904 4719 generic.go:334] "Generic (PLEG): container finished" podID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerID="1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf" exitCode=143 Nov 24 09:13:47 crc kubenswrapper[4719]: I1124 09:13:47.838001 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f","Type":"ContainerDied","Data":"1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf"} Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.190669 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.268579 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-combined-ca-bundle\") pod \"28468fda-a274-493f-8a27-3aa221c5c8db\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.268886 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-config-data\") pod \"28468fda-a274-493f-8a27-3aa221c5c8db\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.268945 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwlqd\" (UniqueName: \"kubernetes.io/projected/28468fda-a274-493f-8a27-3aa221c5c8db-kube-api-access-lwlqd\") pod \"28468fda-a274-493f-8a27-3aa221c5c8db\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.269191 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-scripts\") pod \"28468fda-a274-493f-8a27-3aa221c5c8db\" (UID: \"28468fda-a274-493f-8a27-3aa221c5c8db\") " Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.274275 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-scripts" (OuterVolumeSpecName: "scripts") pod "28468fda-a274-493f-8a27-3aa221c5c8db" (UID: "28468fda-a274-493f-8a27-3aa221c5c8db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.276998 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28468fda-a274-493f-8a27-3aa221c5c8db-kube-api-access-lwlqd" (OuterVolumeSpecName: "kube-api-access-lwlqd") pod "28468fda-a274-493f-8a27-3aa221c5c8db" (UID: "28468fda-a274-493f-8a27-3aa221c5c8db"). InnerVolumeSpecName "kube-api-access-lwlqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.318006 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28468fda-a274-493f-8a27-3aa221c5c8db" (UID: "28468fda-a274-493f-8a27-3aa221c5c8db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.327680 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-config-data" (OuterVolumeSpecName: "config-data") pod "28468fda-a274-493f-8a27-3aa221c5c8db" (UID: "28468fda-a274-493f-8a27-3aa221c5c8db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.371309 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.371332 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.371343 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28468fda-a274-493f-8a27-3aa221c5c8db-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.371351 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwlqd\" (UniqueName: \"kubernetes.io/projected/28468fda-a274-493f-8a27-3aa221c5c8db-kube-api-access-lwlqd\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.848056 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerStarted","Data":"a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84"} Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.849598 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" event={"ID":"28468fda-a274-493f-8a27-3aa221c5c8db","Type":"ContainerDied","Data":"1f2cb26c98b5b2c62df52bf5a1254634b3145573f3b98e09b34b41fad130bdfd"} Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.849622 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f2cb26c98b5b2c62df52bf5a1254634b3145573f3b98e09b34b41fad130bdfd" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.849671 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-x8mh9" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.949940 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 09:13:48 crc kubenswrapper[4719]: E1124 09:13:48.950410 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28468fda-a274-493f-8a27-3aa221c5c8db" containerName="nova-cell1-conductor-db-sync" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.950433 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="28468fda-a274-493f-8a27-3aa221c5c8db" containerName="nova-cell1-conductor-db-sync" Nov 24 09:13:48 crc kubenswrapper[4719]: E1124 09:13:48.950451 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81328d33-7af2-4d4b-9f81-033c996a7d36" containerName="nova-manage" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.950461 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="81328d33-7af2-4d4b-9f81-033c996a7d36" containerName="nova-manage" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.950688 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="28468fda-a274-493f-8a27-3aa221c5c8db" containerName="nova-cell1-conductor-db-sync" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.950716 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="81328d33-7af2-4d4b-9f81-033c996a7d36" containerName="nova-manage" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.951408 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.952938 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.971498 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.981829 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.981927 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xrl\" (UniqueName: \"kubernetes.io/projected/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-kube-api-access-z8xrl\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:48 crc kubenswrapper[4719]: I1124 09:13:48.982055 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.084008 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:49 
crc kubenswrapper[4719]: I1124 09:13:49.084337 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.084396 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8xrl\" (UniqueName: \"kubernetes.io/projected/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-kube-api-access-z8xrl\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.090923 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.095785 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.104491 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8xrl\" (UniqueName: \"kubernetes.io/projected/3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f-kube-api-access-z8xrl\") pod \"nova-cell1-conductor-0\" (UID: \"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f\") " pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.271171 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.750396 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 09:13:49 crc kubenswrapper[4719]: W1124 09:13:49.751982 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c77db21_d39a_4c8d_bd9d_a4e4c3d37a3f.slice/crio-a5d4569872741a4c1dc93dc6da8595d2d8ba7bc9b62661d0aca548b8e8b7b493 WatchSource:0}: Error finding container a5d4569872741a4c1dc93dc6da8595d2d8ba7bc9b62661d0aca548b8e8b7b493: Status 404 returned error can't find the container with id a5d4569872741a4c1dc93dc6da8595d2d8ba7bc9b62661d0aca548b8e8b7b493 Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.865304 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerStarted","Data":"9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a"} Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.866710 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.875866 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f","Type":"ContainerStarted","Data":"a5d4569872741a4c1dc93dc6da8595d2d8ba7bc9b62661d0aca548b8e8b7b493"} Nov 24 09:13:49 crc kubenswrapper[4719]: I1124 09:13:49.893580 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.312233926 podStartE2EDuration="5.893559458s" podCreationTimestamp="2025-11-24 09:13:44 +0000 UTC" firstStartedPulling="2025-11-24 09:13:45.71735298 +0000 UTC m=+1202.048626232" lastFinishedPulling="2025-11-24 09:13:49.298678512 +0000 UTC m=+1205.629951764" observedRunningTime="2025-11-24 09:13:49.885166506 +0000 UTC m=+1206.216439778" watchObservedRunningTime="2025-11-24 09:13:49.893559458 +0000 UTC m=+1206.224832730" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.172817 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.660049 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.714721 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xk9r\" (UniqueName: \"kubernetes.io/projected/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-kube-api-access-9xk9r\") pod \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.714833 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-combined-ca-bundle\") pod \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.714993 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-logs\") pod \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.715086 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-config-data\") pod \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\" (UID: \"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f\") " Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.715582 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-logs" (OuterVolumeSpecName: "logs") pod "eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" (UID: "eb8ab3ac-9a06-4d64-a124-cd50c76dae7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.751875 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-kube-api-access-9xk9r" (OuterVolumeSpecName: "kube-api-access-9xk9r") pod "eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" (UID: "eb8ab3ac-9a06-4d64-a124-cd50c76dae7f"). InnerVolumeSpecName "kube-api-access-9xk9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.756317 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" (UID: "eb8ab3ac-9a06-4d64-a124-cd50c76dae7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.771164 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-config-data" (OuterVolumeSpecName: "config-data") pod "eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" (UID: "eb8ab3ac-9a06-4d64-a124-cd50c76dae7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.817702 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xk9r\" (UniqueName: \"kubernetes.io/projected/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-kube-api-access-9xk9r\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.817743 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.817754 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.817765 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.888114 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f","Type":"ContainerStarted","Data":"47aadcc2c49b35f7cf67cd9d08e5acab00c28d1d49b46cc08b5ee93db9832218"} Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.888265 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.899424 4719 generic.go:334] "Generic (PLEG): container finished" podID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerID="304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5" exitCode=0 Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.900587 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.901263 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f","Type":"ContainerDied","Data":"304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5"} Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.901314 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb8ab3ac-9a06-4d64-a124-cd50c76dae7f","Type":"ContainerDied","Data":"abe0a9a5fbc8c9145449d1b8b6e9dd0d9149ed49cee6218a22af0c80cb190d3d"} Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.901335 4719 scope.go:117] "RemoveContainer" containerID="304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.937874 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.9378548 podStartE2EDuration="2.9378548s" podCreationTimestamp="2025-11-24 09:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:13:50.909116592 +0000 UTC m=+1207.240389854" watchObservedRunningTime="2025-11-24 09:13:50.9378548 +0000 UTC m=+1207.269128052" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.951096 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.960627 4719 scope.go:117] "RemoveContainer" containerID="1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf" Nov 24 09:13:50 crc kubenswrapper[4719]: E1124 09:13:50.962230 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.970275 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:13:50 crc kubenswrapper[4719]: E1124 09:13:50.982286 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.989092 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 09:13:50 crc kubenswrapper[4719]: E1124 09:13:50.989476 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-log" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.989492 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-log" Nov 24 09:13:50 crc kubenswrapper[4719]: E1124 09:13:50.989506 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-api" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.989512 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" 
containerName="nova-api-api" Nov 24 09:13:50 crc kubenswrapper[4719]: E1124 09:13:50.989580 4719 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 09:13:50 crc kubenswrapper[4719]: E1124 09:13:50.989648 4719 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="40976df7-3ef1-4b17-96d1-d259648b046c" containerName="nova-scheduler-scheduler" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.989683 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-log" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.989693 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" containerName="nova-api-api" Nov 24 09:13:50 crc kubenswrapper[4719]: I1124 09:13:50.990578 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.004677 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.016754 4719 scope.go:117] "RemoveContainer" containerID="304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5" Nov 24 09:13:51 crc kubenswrapper[4719]: E1124 09:13:51.017234 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5\": container with ID starting with 304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5 not found: ID does not exist" containerID="304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.017262 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5"} err="failed to get container status \"304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5\": rpc error: code = NotFound desc = could not find container \"304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5\": container with ID starting with 304d78b33ed8a8f9b62e0d8122519025d605cc8295155c2b1cf2f165c29cf6c5 not found: ID does not exist" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.017291 4719 scope.go:117] "RemoveContainer" containerID="1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf" Nov 24 09:13:51 crc kubenswrapper[4719]: E1124 09:13:51.017546 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf\": container with ID starting with 1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf not found: ID does not exist" containerID="1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.017567 4719 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf"} err="failed to get container status \"1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf\": rpc error: code = NotFound desc = could not find container \"1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf\": container with ID starting with 1472117c95c8ec256fdb955907a7a77c26f6c827adb42c92e66cb362dafddddf not found: ID does not exist" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.021295 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.027353 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdm4d\" (UniqueName: \"kubernetes.io/projected/d545ecc5-413b-4c56-99c5-7b709da09b51-kube-api-access-zdm4d\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.027566 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.027649 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-config-data\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.027732 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d545ecc5-413b-4c56-99c5-7b709da09b51-logs\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.128907 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdm4d\" (UniqueName: \"kubernetes.io/projected/d545ecc5-413b-4c56-99c5-7b709da09b51-kube-api-access-zdm4d\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.128984 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.129008 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-config-data\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.129061 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d545ecc5-413b-4c56-99c5-7b709da09b51-logs\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 
09:13:51.129555 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d545ecc5-413b-4c56-99c5-7b709da09b51-logs\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.134274 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-config-data\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.135626 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.155446 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdm4d\" (UniqueName: \"kubernetes.io/projected/d545ecc5-413b-4c56-99c5-7b709da09b51-kube-api-access-zdm4d\") pod \"nova-api-0\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.327239 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.548801 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.637528 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7x26\" (UniqueName: \"kubernetes.io/projected/40976df7-3ef1-4b17-96d1-d259648b046c-kube-api-access-w7x26\") pod \"40976df7-3ef1-4b17-96d1-d259648b046c\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.637569 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-combined-ca-bundle\") pod \"40976df7-3ef1-4b17-96d1-d259648b046c\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.637728 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-config-data\") pod \"40976df7-3ef1-4b17-96d1-d259648b046c\" (UID: \"40976df7-3ef1-4b17-96d1-d259648b046c\") " Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.643996 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40976df7-3ef1-4b17-96d1-d259648b046c-kube-api-access-w7x26" (OuterVolumeSpecName: "kube-api-access-w7x26") pod "40976df7-3ef1-4b17-96d1-d259648b046c" (UID: "40976df7-3ef1-4b17-96d1-d259648b046c"). InnerVolumeSpecName "kube-api-access-w7x26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.686220 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40976df7-3ef1-4b17-96d1-d259648b046c" (UID: "40976df7-3ef1-4b17-96d1-d259648b046c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.697302 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-config-data" (OuterVolumeSpecName: "config-data") pod "40976df7-3ef1-4b17-96d1-d259648b046c" (UID: "40976df7-3ef1-4b17-96d1-d259648b046c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.739973 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7x26\" (UniqueName: \"kubernetes.io/projected/40976df7-3ef1-4b17-96d1-d259648b046c-kube-api-access-w7x26\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.740010 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.740024 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40976df7-3ef1-4b17-96d1-d259648b046c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.890360 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.913799 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d545ecc5-413b-4c56-99c5-7b709da09b51","Type":"ContainerStarted","Data":"319647b31a0b56ed2a7bd97b701f09f1c88bc5b4ad2e0d62d350fd78a785454d"} Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.915859 4719 generic.go:334] "Generic (PLEG): container finished" podID="40976df7-3ef1-4b17-96d1-d259648b046c" containerID="de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38" exitCode=0 Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.915918 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40976df7-3ef1-4b17-96d1-d259648b046c","Type":"ContainerDied","Data":"de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38"} Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.915995 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.916237 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40976df7-3ef1-4b17-96d1-d259648b046c","Type":"ContainerDied","Data":"d244215fccc3c0c98861d61566ec3e2cb757c731969481453d6895f1d4b3f321"} Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.916265 4719 scope.go:117] "RemoveContainer" containerID="de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38" Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.967096 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:13:51 crc kubenswrapper[4719]: I1124 09:13:51.983319 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:51.992645 4719 scope.go:117] "RemoveContainer" containerID="de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38" Nov 24 09:13:52 crc kubenswrapper[4719]: E1124 09:13:51.997275 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38\": container with ID starting with de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38 not found: ID does not exist" containerID="de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:51.997320 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38"} err="failed to get container status \"de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38\": rpc error: code = NotFound desc = could not find container \"de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38\": container with ID starting with de1730698af6e0bd85aca486945ece71a92dad046d8c7e37da414d76beb05c38 not found: ID does not exist" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.036492 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:13:52 crc kubenswrapper[4719]: E1124 09:13:52.037857 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40976df7-3ef1-4b17-96d1-d259648b046c" containerName="nova-scheduler-scheduler" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.037892 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="40976df7-3ef1-4b17-96d1-d259648b046c" containerName="nova-scheduler-scheduler" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.038247 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="40976df7-3ef1-4b17-96d1-d259648b046c" containerName="nova-scheduler-scheduler" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.039093 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.044761 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.084248 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.173021 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sql7s\" (UniqueName: \"kubernetes.io/projected/200b3f6a-9274-440c-885d-e69a1a5d69e1-kube-api-access-sql7s\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.173252 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-config-data\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.173299 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.274837 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-config-data\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.274962 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.275015 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sql7s\" (UniqueName: \"kubernetes.io/projected/200b3f6a-9274-440c-885d-e69a1a5d69e1-kube-api-access-sql7s\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.278685 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-config-data\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.280057 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.294531 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sql7s\" (UniqueName: 
\"kubernetes.io/projected/200b3f6a-9274-440c-885d-e69a1a5d69e1-kube-api-access-sql7s\") pod \"nova-scheduler-0\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.397769 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.546092 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40976df7-3ef1-4b17-96d1-d259648b046c" path="/var/lib/kubelet/pods/40976df7-3ef1-4b17-96d1-d259648b046c/volumes" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.548024 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8ab3ac-9a06-4d64-a124-cd50c76dae7f" path="/var/lib/kubelet/pods/eb8ab3ac-9a06-4d64-a124-cd50c76dae7f/volumes" Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.927815 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.934499 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d545ecc5-413b-4c56-99c5-7b709da09b51","Type":"ContainerStarted","Data":"12f52c576432d36b008139cbd30750731a31b8112afe813b2c99b6fb70dc080c"} Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.934537 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d545ecc5-413b-4c56-99c5-7b709da09b51","Type":"ContainerStarted","Data":"b0ea798408b88995b596effd4c5987ed9b8ef43ae39a446cbae0909365064051"} Nov 24 09:13:52 crc kubenswrapper[4719]: W1124 09:13:52.936716 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod200b3f6a_9274_440c_885d_e69a1a5d69e1.slice/crio-6f4610f0581e954f564caee9ef04e1e2d56aad0337057d158aee9b247bb4fadd WatchSource:0}: Error finding container 6f4610f0581e954f564caee9ef04e1e2d56aad0337057d158aee9b247bb4fadd: Status 404 returned error can't find the container with id 6f4610f0581e954f564caee9ef04e1e2d56aad0337057d158aee9b247bb4fadd Nov 24 09:13:52 crc kubenswrapper[4719]: I1124 09:13:52.956231 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.956211169 podStartE2EDuration="2.956211169s" podCreationTimestamp="2025-11-24 09:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:13:52.953598194 +0000 UTC m=+1209.284871466" watchObservedRunningTime="2025-11-24 09:13:52.956211169 +0000 UTC m=+1209.287484421" Nov 24 09:13:53 crc kubenswrapper[4719]: I1124 09:13:53.947790 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"200b3f6a-9274-440c-885d-e69a1a5d69e1","Type":"ContainerStarted","Data":"2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd"} Nov 24 09:13:53 crc kubenswrapper[4719]: I1124 09:13:53.948161 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"200b3f6a-9274-440c-885d-e69a1a5d69e1","Type":"ContainerStarted","Data":"6f4610f0581e954f564caee9ef04e1e2d56aad0337057d158aee9b247bb4fadd"} Nov 24 09:13:53 crc kubenswrapper[4719]: I1124 09:13:53.967873 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.9678552099999997 podStartE2EDuration="2.96785521s" 
podCreationTimestamp="2025-11-24 09:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:13:53.961840327 +0000 UTC m=+1210.293113609" watchObservedRunningTime="2025-11-24 09:13:53.96785521 +0000 UTC m=+1210.299128462" Nov 24 09:13:54 crc kubenswrapper[4719]: I1124 09:13:54.299217 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 09:13:57 crc kubenswrapper[4719]: I1124 09:13:57.398871 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 09:14:01 crc kubenswrapper[4719]: I1124 09:14:01.328170 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:14:01 crc kubenswrapper[4719]: I1124 09:14:01.328448 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 09:14:02 crc kubenswrapper[4719]: I1124 09:14:02.398399 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 09:14:02 crc kubenswrapper[4719]: I1124 09:14:02.413303 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:14:02 crc kubenswrapper[4719]: I1124 09:14:02.413425 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:14:02 crc kubenswrapper[4719]: I1124 09:14:02.433251 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 09:14:03 crc kubenswrapper[4719]: I1124 09:14:03.091938 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.103555 4719 generic.go:334] "Generic (PLEG): container finished" podID="16402f86-bc6a-4127-8e64-e9eb25435527" containerID="a0f9aa744fb58ae041a6606547c8f734361f5a55553664878402c58e447a82d5" exitCode=137 Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.103677 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16402f86-bc6a-4127-8e64-e9eb25435527","Type":"ContainerDied","Data":"a0f9aa744fb58ae041a6606547c8f734361f5a55553664878402c58e447a82d5"} Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.108252 4719 generic.go:334] "Generic (PLEG): container finished" podID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerID="43fea711240764429e6a9ab28d7fd1e0e45e9905e5cf1936db6b6da1e2276717" exitCode=137 Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.108288 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"91c47491-6b0b-466c-adb6-90f4a4ca9728","Type":"ContainerDied","Data":"43fea711240764429e6a9ab28d7fd1e0e45e9905e5cf1936db6b6da1e2276717"} Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.108318 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"91c47491-6b0b-466c-adb6-90f4a4ca9728","Type":"ContainerDied","Data":"a8336da7f5ef5700f80660ac954279eff470c90675036215a77ef1e7ca84377e"} Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.108335 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8336da7f5ef5700f80660ac954279eff470c90675036215a77ef1e7ca84377e" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.131541 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.222666 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.305770 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91c47491-6b0b-466c-adb6-90f4a4ca9728-logs\") pod \"91c47491-6b0b-466c-adb6-90f4a4ca9728\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.305810 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9n2f\" (UniqueName: \"kubernetes.io/projected/91c47491-6b0b-466c-adb6-90f4a4ca9728-kube-api-access-p9n2f\") pod \"91c47491-6b0b-466c-adb6-90f4a4ca9728\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.306026 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-config-data\") pod \"91c47491-6b0b-466c-adb6-90f4a4ca9728\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.306085 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-combined-ca-bundle\") pod \"91c47491-6b0b-466c-adb6-90f4a4ca9728\" (UID: \"91c47491-6b0b-466c-adb6-90f4a4ca9728\") " Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.307465 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91c47491-6b0b-466c-adb6-90f4a4ca9728-logs" (OuterVolumeSpecName: "logs") pod "91c47491-6b0b-466c-adb6-90f4a4ca9728" (UID: "91c47491-6b0b-466c-adb6-90f4a4ca9728"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.311301 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c47491-6b0b-466c-adb6-90f4a4ca9728-kube-api-access-p9n2f" (OuterVolumeSpecName: "kube-api-access-p9n2f") pod "91c47491-6b0b-466c-adb6-90f4a4ca9728" (UID: "91c47491-6b0b-466c-adb6-90f4a4ca9728"). InnerVolumeSpecName "kube-api-access-p9n2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.329410 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91c47491-6b0b-466c-adb6-90f4a4ca9728" (UID: "91c47491-6b0b-466c-adb6-90f4a4ca9728"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.332776 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-config-data" (OuterVolumeSpecName: "config-data") pod "91c47491-6b0b-466c-adb6-90f4a4ca9728" (UID: "91c47491-6b0b-466c-adb6-90f4a4ca9728"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.408216 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-config-data\") pod \"16402f86-bc6a-4127-8e64-e9eb25435527\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.408523 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-combined-ca-bundle\") pod \"16402f86-bc6a-4127-8e64-e9eb25435527\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.408567 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc8m2\" (UniqueName: \"kubernetes.io/projected/16402f86-bc6a-4127-8e64-e9eb25435527-kube-api-access-kc8m2\") pod \"16402f86-bc6a-4127-8e64-e9eb25435527\" (UID: \"16402f86-bc6a-4127-8e64-e9eb25435527\") " Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.408982 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.409007 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c47491-6b0b-466c-adb6-90f4a4ca9728-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.409021 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91c47491-6b0b-466c-adb6-90f4a4ca9728-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.409050 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9n2f\" (UniqueName: \"kubernetes.io/projected/91c47491-6b0b-466c-adb6-90f4a4ca9728-kube-api-access-p9n2f\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.411006 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16402f86-bc6a-4127-8e64-e9eb25435527-kube-api-access-kc8m2" (OuterVolumeSpecName: "kube-api-access-kc8m2") pod "16402f86-bc6a-4127-8e64-e9eb25435527" (UID: "16402f86-bc6a-4127-8e64-e9eb25435527"). InnerVolumeSpecName "kube-api-access-kc8m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.433916 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-config-data" (OuterVolumeSpecName: "config-data") pod "16402f86-bc6a-4127-8e64-e9eb25435527" (UID: "16402f86-bc6a-4127-8e64-e9eb25435527"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.439727 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16402f86-bc6a-4127-8e64-e9eb25435527" (UID: "16402f86-bc6a-4127-8e64-e9eb25435527"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.510870 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.510912 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc8m2\" (UniqueName: \"kubernetes.io/projected/16402f86-bc6a-4127-8e64-e9eb25435527-kube-api-access-kc8m2\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:08 crc kubenswrapper[4719]: I1124 09:14:08.510924 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16402f86-bc6a-4127-8e64-e9eb25435527-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.118763 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.118761 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"16402f86-bc6a-4127-8e64-e9eb25435527","Type":"ContainerDied","Data":"38646ebbb9bf4318010fe4d64079739c44cea8a53d852b5ec6375db67aabc36c"} Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.119244 4719 scope.go:117] "RemoveContainer" containerID="a0f9aa744fb58ae041a6606547c8f734361f5a55553664878402c58e447a82d5" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.118921 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.170748 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.185189 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.197095 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.208878 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.221946 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:14:09 crc kubenswrapper[4719]: E1124 09:14:09.223181 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerName="nova-metadata-log" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.223212 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerName="nova-metadata-log" Nov 24 09:14:09 crc kubenswrapper[4719]: E1124 09:14:09.223520 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16402f86-bc6a-4127-8e64-e9eb25435527" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.223554 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="16402f86-bc6a-4127-8e64-e9eb25435527" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 09:14:09 crc kubenswrapper[4719]: E1124 09:14:09.223605 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerName="nova-metadata-metadata" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.223615 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerName="nova-metadata-metadata" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.224453 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerName="nova-metadata-metadata" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.224517 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" containerName="nova-metadata-log" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.224539 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="16402f86-bc6a-4127-8e64-e9eb25435527" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.227029 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.229365 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.229386 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.229851 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.260294 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.269834 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.272416 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.272624 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.275478 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.285615 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.428149 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.428221 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.428246 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.428267 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5p5r\" (UniqueName: \"kubernetes.io/projected/8dff5214-54a8-41e2-9e2d-d1e491ba2565-kube-api-access-t5p5r\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.428808 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " 
pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.428902 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgxqw\" (UniqueName: \"kubernetes.io/projected/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-kube-api-access-rgxqw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.428929 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.428995 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-config-data\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.429025 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dff5214-54a8-41e2-9e2d-d1e491ba2565-logs\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.429069 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.530240 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-config-data\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.530520 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dff5214-54a8-41e2-9e2d-d1e491ba2565-logs\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.530637 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.531699 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dff5214-54a8-41e2-9e2d-d1e491ba2565-logs\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.533304 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.534748 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.534799 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.534835 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5p5r\" (UniqueName: \"kubernetes.io/projected/8dff5214-54a8-41e2-9e2d-d1e491ba2565-kube-api-access-t5p5r\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.534888 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.535028 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgxqw\" (UniqueName: \"kubernetes.io/projected/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-kube-api-access-rgxqw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.535073 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.537074 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-config-data\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.539206 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.539922 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.540717 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.543894 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.543987 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.544377 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.560954 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgxqw\" (UniqueName: \"kubernetes.io/projected/6229cd6f-c2de-47c4-9edf-99ebeddaf05b-kube-api-access-rgxqw\") pod \"nova-cell1-novncproxy-0\" (UID: \"6229cd6f-c2de-47c4-9edf-99ebeddaf05b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.563460 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5p5r\" (UniqueName: \"kubernetes.io/projected/8dff5214-54a8-41e2-9e2d-d1e491ba2565-kube-api-access-t5p5r\") pod \"nova-metadata-0\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.594123 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:14:09 crc kubenswrapper[4719]: I1124 09:14:09.859544 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:10 crc kubenswrapper[4719]: I1124 09:14:10.061687 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:14:10 crc kubenswrapper[4719]: W1124 09:14:10.072308 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dff5214_54a8_41e2_9e2d_d1e491ba2565.slice/crio-98d11bcce40f06aa9fa914b8d0780cba03f962c38cc22092f1f2d73d24a8b2f6 WatchSource:0}: Error finding container 98d11bcce40f06aa9fa914b8d0780cba03f962c38cc22092f1f2d73d24a8b2f6: Status 404 returned error can't find the container with id 98d11bcce40f06aa9fa914b8d0780cba03f962c38cc22092f1f2d73d24a8b2f6 Nov 24 09:14:10 crc kubenswrapper[4719]: I1124 09:14:10.137561 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8dff5214-54a8-41e2-9e2d-d1e491ba2565","Type":"ContainerStarted","Data":"98d11bcce40f06aa9fa914b8d0780cba03f962c38cc22092f1f2d73d24a8b2f6"} Nov 24 09:14:10 crc kubenswrapper[4719]: I1124 09:14:10.289634 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 09:14:10 crc kubenswrapper[4719]: W1124 09:14:10.310406 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6229cd6f_c2de_47c4_9edf_99ebeddaf05b.slice/crio-37008a238a1f585f27695662e060832d54e23523de55e3b945d45bb8f80f906b WatchSource:0}: Error finding container 37008a238a1f585f27695662e060832d54e23523de55e3b945d45bb8f80f906b: Status 404 returned error can't find the container with id 37008a238a1f585f27695662e060832d54e23523de55e3b945d45bb8f80f906b Nov 24 09:14:10 crc kubenswrapper[4719]: I1124 09:14:10.535311 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16402f86-bc6a-4127-8e64-e9eb25435527" path="/var/lib/kubelet/pods/16402f86-bc6a-4127-8e64-e9eb25435527/volumes" Nov 24 09:14:10 crc kubenswrapper[4719]: I1124 09:14:10.537014 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c47491-6b0b-466c-adb6-90f4a4ca9728" path="/var/lib/kubelet/pods/91c47491-6b0b-466c-adb6-90f4a4ca9728/volumes" Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.150547 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6229cd6f-c2de-47c4-9edf-99ebeddaf05b","Type":"ContainerStarted","Data":"d06d9d2ccadfb70e32abefe5ed62f4a01d2945d10984e20872723f18386f3564"} Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.150596 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6229cd6f-c2de-47c4-9edf-99ebeddaf05b","Type":"ContainerStarted","Data":"37008a238a1f585f27695662e060832d54e23523de55e3b945d45bb8f80f906b"} Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.163144 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8dff5214-54a8-41e2-9e2d-d1e491ba2565","Type":"ContainerStarted","Data":"dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa"} Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.163194 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8dff5214-54a8-41e2-9e2d-d1e491ba2565","Type":"ContainerStarted","Data":"a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911"} Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.171422 4719 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.171398797 podStartE2EDuration="2.171398797s" podCreationTimestamp="2025-11-24 09:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:14:11.169108261 +0000 UTC m=+1227.500381543" watchObservedRunningTime="2025-11-24 09:14:11.171398797 +0000 UTC m=+1227.502672059" Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.197167 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.197150529 podStartE2EDuration="2.197150529s" podCreationTimestamp="2025-11-24 09:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:14:11.189112787 +0000 UTC m=+1227.520386050" watchObservedRunningTime="2025-11-24 09:14:11.197150529 +0000 UTC m=+1227.528423781" Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.331755 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.332045 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.333140 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 09:14:11 crc kubenswrapper[4719]: I1124 09:14:11.338437 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.175172 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.180471 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.391056 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-7nc5m"] Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.392774 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.414155 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-7nc5m"] Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.504212 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.504295 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwd7\" (UniqueName: \"kubernetes.io/projected/fd1ef8b1-96f2-488a-aa4d-de553fa73425-kube-api-access-8pwd7\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.504357 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.504558 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-config\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.504717 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.605779 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-config\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.605860 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.605904 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.605923 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8pwd7\" (UniqueName: \"kubernetes.io/projected/fd1ef8b1-96f2-488a-aa4d-de553fa73425-kube-api-access-8pwd7\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.606733 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-config\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.606733 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.606752 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.606820 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.607481 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.631883 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pwd7\" (UniqueName: \"kubernetes.io/projected/fd1ef8b1-96f2-488a-aa4d-de553fa73425-kube-api-access-8pwd7\") pod \"dnsmasq-dns-68d4b6d797-7nc5m\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:12 crc kubenswrapper[4719]: I1124 09:14:12.727687 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:13 crc kubenswrapper[4719]: I1124 09:14:13.238218 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-7nc5m"] Nov 24 09:14:13 crc kubenswrapper[4719]: I1124 09:14:13.750988 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:13 crc kubenswrapper[4719]: I1124 09:14:13.752617 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="ceilometer-central-agent" containerID="cri-o://00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e" gracePeriod=30 Nov 24 09:14:13 crc kubenswrapper[4719]: I1124 09:14:13.753642 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="proxy-httpd" containerID="cri-o://9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a" gracePeriod=30 Nov 24 09:14:13 crc kubenswrapper[4719]: I1124 09:14:13.753716 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="sg-core" containerID="cri-o://a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84" gracePeriod=30 Nov 24 09:14:13 crc kubenswrapper[4719]: I1124 09:14:13.753767 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="ceilometer-notification-agent" containerID="cri-o://3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3" gracePeriod=30 Nov 24 09:14:13 crc kubenswrapper[4719]: I1124 09:14:13.778925 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:3000/\": read tcp 10.217.0.2:50550->10.217.0.176:3000: read: connection reset by peer" Nov 24 09:14:13 crc kubenswrapper[4719]: E1124 09:14:13.842269 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod913a8e91_83da_4a4e_8732_1504279e5649.slice/crio-conmon-a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.195527 4719 generic.go:334] "Generic (PLEG): container finished" podID="913a8e91-83da-4a4e-8732-1504279e5649" containerID="9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a" exitCode=0 Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.195566 4719 generic.go:334] "Generic (PLEG): container finished" podID="913a8e91-83da-4a4e-8732-1504279e5649" containerID="a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84" exitCode=2 Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.195578 4719 generic.go:334] "Generic (PLEG): container finished" podID="913a8e91-83da-4a4e-8732-1504279e5649" containerID="00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e" exitCode=0 Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.195644 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerDied","Data":"9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a"} Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.195675 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerDied","Data":"a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84"} Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.195690 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerDied","Data":"00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e"} Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.197798 4719 generic.go:334] "Generic (PLEG): container finished" podID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" containerID="0955e9090d494ea81143f7eab3e78019a9a733e2999196e5a6efb61fec9ff4e0" exitCode=0 Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.199249 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" event={"ID":"fd1ef8b1-96f2-488a-aa4d-de553fa73425","Type":"ContainerDied","Data":"0955e9090d494ea81143f7eab3e78019a9a733e2999196e5a6efb61fec9ff4e0"} Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.199289 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" event={"ID":"fd1ef8b1-96f2-488a-aa4d-de553fa73425","Type":"ContainerStarted","Data":"3785f3270577fcbc8845ef71dcfe44659f86f850093ed63968ca4621ab1cb9f3"} Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.594880 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.595239 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 09:14:14 crc kubenswrapper[4719]: I1124 09:14:14.861457 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:15 crc kubenswrapper[4719]: I1124 09:14:15.208824 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:3000/\": dial tcp 10.217.0.176:3000: connect: connection refused" Nov 24 09:14:15 crc kubenswrapper[4719]: I1124 09:14:15.212985 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" event={"ID":"fd1ef8b1-96f2-488a-aa4d-de553fa73425","Type":"ContainerStarted","Data":"fae42a51fc3c74ecfbe7893972022bc7fb115d666cb1d439138b2b2ff744b504"} Nov 24 09:14:15 crc kubenswrapper[4719]: I1124 09:14:15.213327 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:15 crc kubenswrapper[4719]: I1124 09:14:15.246001 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" podStartSLOduration=3.245984748 podStartE2EDuration="3.245984748s" podCreationTimestamp="2025-11-24 09:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:14:15.236178665 +0000 UTC m=+1231.567451937" watchObservedRunningTime="2025-11-24 09:14:15.245984748 +0000 UTC m=+1231.577258000" Nov 24 09:14:15 crc 
Nov 24 09:14:15 crc kubenswrapper[4719]: I1124 09:14:15.715944 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-log" containerID="cri-o://b0ea798408b88995b596effd4c5987ed9b8ef43ae39a446cbae0909365064051" gracePeriod=30
Nov 24 09:14:15 crc kubenswrapper[4719]: I1124 09:14:15.716298 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-api" containerID="cri-o://12f52c576432d36b008139cbd30750731a31b8112afe813b2c99b6fb70dc080c" gracePeriod=30
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.146105 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.204287 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-sg-core-conf-yaml\") pod \"913a8e91-83da-4a4e-8732-1504279e5649\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") "
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.204391 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-ceilometer-tls-certs\") pod \"913a8e91-83da-4a4e-8732-1504279e5649\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") "
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.204416 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-config-data\") pod \"913a8e91-83da-4a4e-8732-1504279e5649\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") "
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.204440 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-combined-ca-bundle\") pod \"913a8e91-83da-4a4e-8732-1504279e5649\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") "
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.204471 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh5j4\" (UniqueName: \"kubernetes.io/projected/913a8e91-83da-4a4e-8732-1504279e5649-kube-api-access-vh5j4\") pod \"913a8e91-83da-4a4e-8732-1504279e5649\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") "
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.204493 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-log-httpd\") pod \"913a8e91-83da-4a4e-8732-1504279e5649\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") "
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.204534 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-scripts\") pod \"913a8e91-83da-4a4e-8732-1504279e5649\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") "
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.204591 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-run-httpd\") pod \"913a8e91-83da-4a4e-8732-1504279e5649\" (UID: \"913a8e91-83da-4a4e-8732-1504279e5649\") "
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.205239 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "913a8e91-83da-4a4e-8732-1504279e5649" (UID: "913a8e91-83da-4a4e-8732-1504279e5649"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.205994 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "913a8e91-83da-4a4e-8732-1504279e5649" (UID: "913a8e91-83da-4a4e-8732-1504279e5649"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.240210 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-scripts" (OuterVolumeSpecName: "scripts") pod "913a8e91-83da-4a4e-8732-1504279e5649" (UID: "913a8e91-83da-4a4e-8732-1504279e5649"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.240342 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913a8e91-83da-4a4e-8732-1504279e5649-kube-api-access-vh5j4" (OuterVolumeSpecName: "kube-api-access-vh5j4") pod "913a8e91-83da-4a4e-8732-1504279e5649" (UID: "913a8e91-83da-4a4e-8732-1504279e5649"). InnerVolumeSpecName "kube-api-access-vh5j4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.243288 4719 generic.go:334] "Generic (PLEG): container finished" podID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerID="b0ea798408b88995b596effd4c5987ed9b8ef43ae39a446cbae0909365064051" exitCode=143
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.243348 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d545ecc5-413b-4c56-99c5-7b709da09b51","Type":"ContainerDied","Data":"b0ea798408b88995b596effd4c5987ed9b8ef43ae39a446cbae0909365064051"}
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.256342 4719 generic.go:334] "Generic (PLEG): container finished" podID="913a8e91-83da-4a4e-8732-1504279e5649" containerID="3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3" exitCode=0
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.256427 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerDied","Data":"3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3"}
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.256482 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"913a8e91-83da-4a4e-8732-1504279e5649","Type":"ContainerDied","Data":"9711b401f013d032bf038de124fa6763de8d9a184dd80ef2e811de2d7a7a3046"}
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.256500 4719 scope.go:117] "RemoveContainer" containerID="9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a"
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.256521 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.264184 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "913a8e91-83da-4a4e-8732-1504279e5649" (UID: "913a8e91-83da-4a4e-8732-1504279e5649"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.282629 4719 scope.go:117] "RemoveContainer" containerID="a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.306835 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh5j4\" (UniqueName: \"kubernetes.io/projected/913a8e91-83da-4a4e-8732-1504279e5649-kube-api-access-vh5j4\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.306875 4719 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.306887 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.306895 4719 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/913a8e91-83da-4a4e-8732-1504279e5649-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.306903 4719 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.320158 4719 scope.go:117] "RemoveContainer" containerID="3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.323616 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "913a8e91-83da-4a4e-8732-1504279e5649" (UID: "913a8e91-83da-4a4e-8732-1504279e5649"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.325412 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "913a8e91-83da-4a4e-8732-1504279e5649" (UID: "913a8e91-83da-4a4e-8732-1504279e5649"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.342226 4719 scope.go:117] "RemoveContainer" containerID="00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.360333 4719 scope.go:117] "RemoveContainer" containerID="9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a" Nov 24 09:14:16 crc kubenswrapper[4719]: E1124 09:14:16.361185 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a\": container with ID starting with 9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a not found: ID does not exist" containerID="9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.361251 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a"} err="failed to get container status \"9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a\": rpc error: code = NotFound desc = could not find container \"9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a\": container with ID starting with 9fb8dfc84cd5360d65433f0cacc62fec64f699d0f9476caf55c74352dbe29f1a not found: ID does not exist" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.361278 4719 scope.go:117] "RemoveContainer" containerID="a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84" Nov 24 09:14:16 crc kubenswrapper[4719]: E1124 09:14:16.361762 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84\": container with ID starting with a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84 not found: ID does not exist" containerID="a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.361785 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84"} err="failed to get container status \"a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84\": rpc error: code = NotFound desc = could not find container \"a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84\": container with ID starting with a362a878719f6e16dc1f16b17e2b5402045119d21de75cc55212a5e6d748ff84 not found: ID does not exist" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.362022 4719 scope.go:117] "RemoveContainer" containerID="3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3" Nov 24 09:14:16 crc kubenswrapper[4719]: E1124 09:14:16.363726 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3\": container with ID starting with 3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3 not found: ID does not exist" containerID="3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.363787 4719 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3"} err="failed to get container status \"3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3\": rpc error: code = NotFound desc = could not find container \"3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3\": container with ID starting with 3f8129f4b6f4ef204f37972d3c87a45adbfdcd72215150e6f25743b21a45e4d3 not found: ID does not exist" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.363812 4719 scope.go:117] "RemoveContainer" containerID="00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e" Nov 24 09:14:16 crc kubenswrapper[4719]: E1124 09:14:16.364156 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e\": container with ID starting with 00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e not found: ID does not exist" containerID="00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.364216 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e"} err="failed to get container status \"00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e\": rpc error: code = NotFound desc = could not find container \"00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e\": container with ID starting with 00d0f4a33be884f06a8680781d9aa86853777026adc2f235b12e82b0f4a0683e not found: ID does not exist" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.374082 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-config-data" (OuterVolumeSpecName: "config-data") pod "913a8e91-83da-4a4e-8732-1504279e5649" (UID: "913a8e91-83da-4a4e-8732-1504279e5649"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.407998 4719 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.408506 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.408565 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/913a8e91-83da-4a4e-8732-1504279e5649-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.583105 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.597828 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.606705 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:16 crc kubenswrapper[4719]: E1124 09:14:16.607168 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="proxy-httpd" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.607188 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="proxy-httpd" Nov 24 09:14:16 crc kubenswrapper[4719]: E1124 09:14:16.607209 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="ceilometer-central-agent" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.607217 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="ceilometer-central-agent" Nov 24 09:14:16 crc kubenswrapper[4719]: E1124 09:14:16.607227 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="ceilometer-notification-agent" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.607234 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="ceilometer-notification-agent" Nov 24 09:14:16 crc kubenswrapper[4719]: E1124 09:14:16.607261 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="sg-core" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.607269 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="sg-core" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.607472 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="ceilometer-notification-agent" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.607494 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="sg-core" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.607509 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="proxy-httpd" Nov 24 09:14:16 crc 
kubenswrapper[4719]: I1124 09:14:16.607530 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="913a8e91-83da-4a4e-8732-1504279e5649" containerName="ceilometer-central-agent" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.609402 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.613548 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.613597 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.613652 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.632885 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.713342 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.713393 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.713433 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-log-httpd\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.713458 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-run-httpd\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.713511 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-scripts\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.713528 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.713569 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6bh\" (UniqueName: \"kubernetes.io/projected/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-kube-api-access-pn6bh\") pod 
\"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.713649 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-config-data\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.815314 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.816161 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.816263 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-log-httpd\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.816335 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-run-httpd\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.816442 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-scripts\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.816511 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.816591 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6bh\" (UniqueName: \"kubernetes.io/projected/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-kube-api-access-pn6bh\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.816700 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-config-data\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.816876 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-log-httpd\") pod 
\"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.817157 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-run-httpd\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.820504 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.820667 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.820682 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.821349 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-scripts\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.828721 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-config-data\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.852951 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6bh\" (UniqueName: \"kubernetes.io/projected/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-kube-api-access-pn6bh\") pod \"ceilometer-0\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " pod="openstack/ceilometer-0" Nov 24 09:14:16 crc kubenswrapper[4719]: I1124 09:14:16.925709 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:14:17 crc kubenswrapper[4719]: I1124 09:14:17.424801 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:17 crc kubenswrapper[4719]: W1124 09:14:17.433283 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd11b298c_1e1b_4501_b8ec_fdf3e2c77254.slice/crio-5269c7b5714395aa319b838c14af8dff56b570c20ab34eb588a846e78dd6d9d6 WatchSource:0}: Error finding container 5269c7b5714395aa319b838c14af8dff56b570c20ab34eb588a846e78dd6d9d6: Status 404 returned error can't find the container with id 5269c7b5714395aa319b838c14af8dff56b570c20ab34eb588a846e78dd6d9d6 Nov 24 09:14:17 crc kubenswrapper[4719]: I1124 09:14:17.435844 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:14:17 crc kubenswrapper[4719]: I1124 09:14:17.776145 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:18 crc kubenswrapper[4719]: I1124 09:14:18.273378 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerStarted","Data":"5269c7b5714395aa319b838c14af8dff56b570c20ab34eb588a846e78dd6d9d6"} Nov 24 09:14:18 crc kubenswrapper[4719]: I1124 09:14:18.530885 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913a8e91-83da-4a4e-8732-1504279e5649" path="/var/lib/kubelet/pods/913a8e91-83da-4a4e-8732-1504279e5649/volumes" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.303852 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerStarted","Data":"159aecec2c7c260ffe4959f8b626e510dd5e6ec7340352a9f7d4c4e85f252b2b"} Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.304183 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerStarted","Data":"5159834a7eeea2e2deed0ddbc09521c15bcc861c8a6b6817698ef52167b224f1"} Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.305877 4719 generic.go:334] "Generic (PLEG): container finished" podID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerID="12f52c576432d36b008139cbd30750731a31b8112afe813b2c99b6fb70dc080c" exitCode=0 Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.305907 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d545ecc5-413b-4c56-99c5-7b709da09b51","Type":"ContainerDied","Data":"12f52c576432d36b008139cbd30750731a31b8112afe813b2c99b6fb70dc080c"} Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.305924 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d545ecc5-413b-4c56-99c5-7b709da09b51","Type":"ContainerDied","Data":"319647b31a0b56ed2a7bd97b701f09f1c88bc5b4ad2e0d62d350fd78a785454d"} Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.305936 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319647b31a0b56ed2a7bd97b701f09f1c88bc5b4ad2e0d62d350fd78a785454d" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.341004 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.379132 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdm4d\" (UniqueName: \"kubernetes.io/projected/d545ecc5-413b-4c56-99c5-7b709da09b51-kube-api-access-zdm4d\") pod \"d545ecc5-413b-4c56-99c5-7b709da09b51\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.379246 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d545ecc5-413b-4c56-99c5-7b709da09b51-logs\") pod \"d545ecc5-413b-4c56-99c5-7b709da09b51\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.379352 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-combined-ca-bundle\") pod \"d545ecc5-413b-4c56-99c5-7b709da09b51\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.379435 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-config-data\") pod \"d545ecc5-413b-4c56-99c5-7b709da09b51\" (UID: \"d545ecc5-413b-4c56-99c5-7b709da09b51\") " Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.380279 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d545ecc5-413b-4c56-99c5-7b709da09b51-logs" (OuterVolumeSpecName: "logs") pod "d545ecc5-413b-4c56-99c5-7b709da09b51" (UID: "d545ecc5-413b-4c56-99c5-7b709da09b51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.386171 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d545ecc5-413b-4c56-99c5-7b709da09b51-kube-api-access-zdm4d" (OuterVolumeSpecName: "kube-api-access-zdm4d") pod "d545ecc5-413b-4c56-99c5-7b709da09b51" (UID: "d545ecc5-413b-4c56-99c5-7b709da09b51"). InnerVolumeSpecName "kube-api-access-zdm4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.440105 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d545ecc5-413b-4c56-99c5-7b709da09b51" (UID: "d545ecc5-413b-4c56-99c5-7b709da09b51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.442154 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-config-data" (OuterVolumeSpecName: "config-data") pod "d545ecc5-413b-4c56-99c5-7b709da09b51" (UID: "d545ecc5-413b-4c56-99c5-7b709da09b51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.481104 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d545ecc5-413b-4c56-99c5-7b709da09b51-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.481147 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.481163 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d545ecc5-413b-4c56-99c5-7b709da09b51-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.481176 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdm4d\" (UniqueName: \"kubernetes.io/projected/d545ecc5-413b-4c56-99c5-7b709da09b51-kube-api-access-zdm4d\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.594653 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.595064 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.860906 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:19 crc kubenswrapper[4719]: I1124 09:14:19.888449 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.322482 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerStarted","Data":"5dff983b0eb6808ab3bc4493eb064e9e9d1dcac26865840ee3d6418a7d3306de"} Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.322663 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.358027 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.366336 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.377317 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.392903 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:20 crc kubenswrapper[4719]: E1124 09:14:20.393297 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-api" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.393314 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-api" Nov 24 09:14:20 crc kubenswrapper[4719]: E1124 09:14:20.393331 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-log" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.393338 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-log" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.393506 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-log" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.393525 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" containerName="nova-api-api" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.394937 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.397339 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.397994 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.404315 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.471471 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.497399 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-config-data\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.497478 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82h4l\" (UniqueName: \"kubernetes.io/projected/67ec838d-9722-4e6e-9a57-08ec9f1acabe-kube-api-access-82h4l\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.497499 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.497579 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ec838d-9722-4e6e-9a57-08ec9f1acabe-logs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.497614 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-public-tls-certs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.497645 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.533452 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d545ecc5-413b-4c56-99c5-7b709da09b51" path="/var/lib/kubelet/pods/d545ecc5-413b-4c56-99c5-7b709da09b51/volumes" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.599420 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-public-tls-certs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc 
kubenswrapper[4719]: I1124 09:14:20.599464 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.599525 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-config-data\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.599593 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82h4l\" (UniqueName: \"kubernetes.io/projected/67ec838d-9722-4e6e-9a57-08ec9f1acabe-kube-api-access-82h4l\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.599613 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.599753 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ec838d-9722-4e6e-9a57-08ec9f1acabe-logs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.600177 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ec838d-9722-4e6e-9a57-08ec9f1acabe-logs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.633645 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.643482 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.181:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.643583 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.181:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.648915 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-internal-tls-certs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc 
kubenswrapper[4719]: I1124 09:14:20.649449 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-config-data\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.660531 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-public-tls-certs\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.675821 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82h4l\" (UniqueName: \"kubernetes.io/projected/67ec838d-9722-4e6e-9a57-08ec9f1acabe-kube-api-access-82h4l\") pod \"nova-api-0\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.720506 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.748607 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hsp2d"] Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.750092 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hsp2d"] Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.750216 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.753482 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.754935 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.806563 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.806618 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-scripts\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.806649 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5zx\" (UniqueName: \"kubernetes.io/projected/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-kube-api-access-wx5zx\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.806675 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-config-data\") pod 
\"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.911295 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-scripts\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.911613 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.911660 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5zx\" (UniqueName: \"kubernetes.io/projected/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-kube-api-access-wx5zx\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.911694 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-config-data\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.915516 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-config-data\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.937504 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-scripts\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.949255 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5zx\" (UniqueName: \"kubernetes.io/projected/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-kube-api-access-wx5zx\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:20 crc kubenswrapper[4719]: I1124 09:14:20.953404 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hsp2d\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.064991 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.309987 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.340296 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerStarted","Data":"bb78166c61d9d949a394bf43ef5a970c3200d43ce137c3a492a0baeebfeae44f"} Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.341882 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="ceilometer-central-agent" containerID="cri-o://5159834a7eeea2e2deed0ddbc09521c15bcc861c8a6b6817698ef52167b224f1" gracePeriod=30 Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.342012 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="proxy-httpd" containerID="cri-o://bb78166c61d9d949a394bf43ef5a970c3200d43ce137c3a492a0baeebfeae44f" gracePeriod=30 Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.342082 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="sg-core" containerID="cri-o://5dff983b0eb6808ab3bc4493eb064e9e9d1dcac26865840ee3d6418a7d3306de" gracePeriod=30 Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.342129 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="ceilometer-notification-agent" containerID="cri-o://159aecec2c7c260ffe4959f8b626e510dd5e6ec7340352a9f7d4c4e85f252b2b" gracePeriod=30 Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.369848 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"67ec838d-9722-4e6e-9a57-08ec9f1acabe","Type":"ContainerStarted","Data":"c4a34f8282e83980240c851e223e8a0d27f4ed5ebd7518148358cc5aeab57a86"} Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.381608 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.998457903 podStartE2EDuration="5.381591176s" podCreationTimestamp="2025-11-24 09:14:16 +0000 UTC" firstStartedPulling="2025-11-24 09:14:17.435587989 +0000 UTC m=+1233.766861241" lastFinishedPulling="2025-11-24 09:14:20.818721262 +0000 UTC m=+1237.149994514" observedRunningTime="2025-11-24 09:14:21.377522219 +0000 UTC m=+1237.708795491" watchObservedRunningTime="2025-11-24 09:14:21.381591176 +0000 UTC m=+1237.712864438" Nov 24 09:14:21 crc kubenswrapper[4719]: I1124 09:14:21.559754 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hsp2d"] Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.384847 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hsp2d" event={"ID":"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e","Type":"ContainerStarted","Data":"cbb213cd89c4180e8c8588226c99002e690f2edf775ee64ddc4e71361d03a5b8"} Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.384934 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hsp2d" 
event={"ID":"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e","Type":"ContainerStarted","Data":"3947d50a35cac1809af3a3ccfab320cd4b326988b9ce6f36c564ae620f4c0c66"} Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.389179 4719 generic.go:334] "Generic (PLEG): container finished" podID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerID="5dff983b0eb6808ab3bc4493eb064e9e9d1dcac26865840ee3d6418a7d3306de" exitCode=2 Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.389216 4719 generic.go:334] "Generic (PLEG): container finished" podID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerID="159aecec2c7c260ffe4959f8b626e510dd5e6ec7340352a9f7d4c4e85f252b2b" exitCode=0 Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.389276 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerDied","Data":"5dff983b0eb6808ab3bc4493eb064e9e9d1dcac26865840ee3d6418a7d3306de"} Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.389312 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerDied","Data":"159aecec2c7c260ffe4959f8b626e510dd5e6ec7340352a9f7d4c4e85f252b2b"} Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.390993 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"67ec838d-9722-4e6e-9a57-08ec9f1acabe","Type":"ContainerStarted","Data":"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b"} Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.391024 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"67ec838d-9722-4e6e-9a57-08ec9f1acabe","Type":"ContainerStarted","Data":"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9"} Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.403616 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hsp2d" podStartSLOduration=2.4035960960000002 podStartE2EDuration="2.403596096s" podCreationTimestamp="2025-11-24 09:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:14:22.401218017 +0000 UTC m=+1238.732491309" watchObservedRunningTime="2025-11-24 09:14:22.403596096 +0000 UTC m=+1238.734869368" Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.424561 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.4245425689999998 podStartE2EDuration="2.424542569s" podCreationTimestamp="2025-11-24 09:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:14:22.423730796 +0000 UTC m=+1238.755004048" watchObservedRunningTime="2025-11-24 09:14:22.424542569 +0000 UTC m=+1238.755815821" Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.729369 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.791364 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-f6wxl"] Nov 24 09:14:22 crc kubenswrapper[4719]: I1124 09:14:22.791588 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" 
podUID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" containerName="dnsmasq-dns" containerID="cri-o://e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230" gracePeriod=10 Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.377413 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.408976 4719 generic.go:334] "Generic (PLEG): container finished" podID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" containerID="e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230" exitCode=0 Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.409023 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.409079 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" event={"ID":"6c4416ac-0d2d-4fce-a5cf-51baceca7650","Type":"ContainerDied","Data":"e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230"} Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.409118 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-f6wxl" event={"ID":"6c4416ac-0d2d-4fce-a5cf-51baceca7650","Type":"ContainerDied","Data":"09da56720568784089ce0a9e2c9ee4eb5f1197acea99292fff6674f0cdffe73e"} Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.409142 4719 scope.go:117] "RemoveContainer" containerID="e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.440610 4719 scope.go:117] "RemoveContainer" containerID="4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.468134 4719 scope.go:117] "RemoveContainer" containerID="e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230" Nov 24 09:14:23 crc kubenswrapper[4719]: E1124 09:14:23.468716 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230\": container with ID starting with e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230 not found: ID does not exist" containerID="e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.468755 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230"} err="failed to get container status \"e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230\": rpc error: code = NotFound desc = could not find container \"e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230\": container with ID starting with e0f48ccc7778a8699ff51bd7929a5ae2f981357ea767120a7189d7835d83c230 not found: ID does not exist" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.468779 4719 scope.go:117] "RemoveContainer" containerID="4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.469337 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-dns-svc\") pod \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " Nov 24 
09:14:23 crc kubenswrapper[4719]: E1124 09:14:23.469446 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997\": container with ID starting with 4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997 not found: ID does not exist" containerID="4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.469502 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-nb\") pod \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.469493 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997"} err="failed to get container status \"4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997\": rpc error: code = NotFound desc = could not find container \"4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997\": container with ID starting with 4460cbbcb82aa131b39556f155b303a05a1ebc028f988d2e9d93ec316163f997 not found: ID does not exist" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.469546 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-sb\") pod \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.469617 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-config\") pod \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.469677 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f52sb\" (UniqueName: \"kubernetes.io/projected/6c4416ac-0d2d-4fce-a5cf-51baceca7650-kube-api-access-f52sb\") pod \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\" (UID: \"6c4416ac-0d2d-4fce-a5cf-51baceca7650\") " Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.476368 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4416ac-0d2d-4fce-a5cf-51baceca7650-kube-api-access-f52sb" (OuterVolumeSpecName: "kube-api-access-f52sb") pod "6c4416ac-0d2d-4fce-a5cf-51baceca7650" (UID: "6c4416ac-0d2d-4fce-a5cf-51baceca7650"). InnerVolumeSpecName "kube-api-access-f52sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.551197 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c4416ac-0d2d-4fce-a5cf-51baceca7650" (UID: "6c4416ac-0d2d-4fce-a5cf-51baceca7650"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.557442 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c4416ac-0d2d-4fce-a5cf-51baceca7650" (UID: "6c4416ac-0d2d-4fce-a5cf-51baceca7650"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.558232 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-config" (OuterVolumeSpecName: "config") pod "6c4416ac-0d2d-4fce-a5cf-51baceca7650" (UID: "6c4416ac-0d2d-4fce-a5cf-51baceca7650"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.572996 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.573887 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.573960 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f52sb\" (UniqueName: \"kubernetes.io/projected/6c4416ac-0d2d-4fce-a5cf-51baceca7650-kube-api-access-f52sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.574060 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.592226 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c4416ac-0d2d-4fce-a5cf-51baceca7650" (UID: "6c4416ac-0d2d-4fce-a5cf-51baceca7650"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.675926 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c4416ac-0d2d-4fce-a5cf-51baceca7650-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.750098 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-f6wxl"] Nov 24 09:14:23 crc kubenswrapper[4719]: I1124 09:14:23.759907 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-f6wxl"] Nov 24 09:14:24 crc kubenswrapper[4719]: I1124 09:14:24.532118 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" path="/var/lib/kubelet/pods/6c4416ac-0d2d-4fce-a5cf-51baceca7650/volumes" Nov 24 09:14:26 crc kubenswrapper[4719]: I1124 09:14:26.449200 4719 generic.go:334] "Generic (PLEG): container finished" podID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerID="5159834a7eeea2e2deed0ddbc09521c15bcc861c8a6b6817698ef52167b224f1" exitCode=0 Nov 24 09:14:26 crc kubenswrapper[4719]: I1124 09:14:26.449271 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerDied","Data":"5159834a7eeea2e2deed0ddbc09521c15bcc861c8a6b6817698ef52167b224f1"} Nov 24 09:14:27 crc kubenswrapper[4719]: I1124 09:14:27.461687 4719 generic.go:334] "Generic (PLEG): container finished" podID="996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" containerID="cbb213cd89c4180e8c8588226c99002e690f2edf775ee64ddc4e71361d03a5b8" exitCode=0 Nov 24 09:14:27 crc kubenswrapper[4719]: I1124 09:14:27.461872 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hsp2d" event={"ID":"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e","Type":"ContainerDied","Data":"cbb213cd89c4180e8c8588226c99002e690f2edf775ee64ddc4e71361d03a5b8"} Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.858918 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.878220 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-config-data\") pod \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.878398 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx5zx\" (UniqueName: \"kubernetes.io/projected/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-kube-api-access-wx5zx\") pod \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.878693 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-combined-ca-bundle\") pod \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.878727 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-scripts\") pod \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\" (UID: \"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e\") " Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.892725 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-kube-api-access-wx5zx" (OuterVolumeSpecName: "kube-api-access-wx5zx") pod "996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" (UID: "996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e"). InnerVolumeSpecName "kube-api-access-wx5zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.893126 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-scripts" (OuterVolumeSpecName: "scripts") pod "996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" (UID: "996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.932411 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" (UID: "996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.942199 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-config-data" (OuterVolumeSpecName: "config-data") pod "996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" (UID: "996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.981402 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx5zx\" (UniqueName: \"kubernetes.io/projected/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-kube-api-access-wx5zx\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.981439 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.981453 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:28 crc kubenswrapper[4719]: I1124 09:14:28.981464 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.488159 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hsp2d" event={"ID":"996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e","Type":"ContainerDied","Data":"3947d50a35cac1809af3a3ccfab320cd4b326988b9ce6f36c564ae620f4c0c66"} Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.488227 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3947d50a35cac1809af3a3ccfab320cd4b326988b9ce6f36c564ae620f4c0c66" Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.488273 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hsp2d" Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.602925 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.605010 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.613083 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.701024 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.701292 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerName="nova-api-log" containerID="cri-o://bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9" gracePeriod=30 Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.701405 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerName="nova-api-api" containerID="cri-o://0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b" gracePeriod=30 Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.719183 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.719396 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="200b3f6a-9274-440c-885d-e69a1a5d69e1" 
containerName="nova-scheduler-scheduler" containerID="cri-o://2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd" gracePeriod=30 Nov 24 09:14:29 crc kubenswrapper[4719]: I1124 09:14:29.772132 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.360165 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.408916 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ec838d-9722-4e6e-9a57-08ec9f1acabe-logs\") pod \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.409076 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-combined-ca-bundle\") pod \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.409120 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-public-tls-certs\") pod \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.409157 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82h4l\" (UniqueName: \"kubernetes.io/projected/67ec838d-9722-4e6e-9a57-08ec9f1acabe-kube-api-access-82h4l\") pod \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.409181 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-internal-tls-certs\") pod \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.409233 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-config-data\") pod \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\" (UID: \"67ec838d-9722-4e6e-9a57-08ec9f1acabe\") " Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.409595 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ec838d-9722-4e6e-9a57-08ec9f1acabe-logs" (OuterVolumeSpecName: "logs") pod "67ec838d-9722-4e6e-9a57-08ec9f1acabe" (UID: "67ec838d-9722-4e6e-9a57-08ec9f1acabe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.409887 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ec838d-9722-4e6e-9a57-08ec9f1acabe-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.431761 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ec838d-9722-4e6e-9a57-08ec9f1acabe-kube-api-access-82h4l" (OuterVolumeSpecName: "kube-api-access-82h4l") pod "67ec838d-9722-4e6e-9a57-08ec9f1acabe" (UID: "67ec838d-9722-4e6e-9a57-08ec9f1acabe"). InnerVolumeSpecName "kube-api-access-82h4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.448105 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67ec838d-9722-4e6e-9a57-08ec9f1acabe" (UID: "67ec838d-9722-4e6e-9a57-08ec9f1acabe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.457687 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-config-data" (OuterVolumeSpecName: "config-data") pod "67ec838d-9722-4e6e-9a57-08ec9f1acabe" (UID: "67ec838d-9722-4e6e-9a57-08ec9f1acabe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.470407 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "67ec838d-9722-4e6e-9a57-08ec9f1acabe" (UID: "67ec838d-9722-4e6e-9a57-08ec9f1acabe"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.483483 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "67ec838d-9722-4e6e-9a57-08ec9f1acabe" (UID: "67ec838d-9722-4e6e-9a57-08ec9f1acabe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.496673 4719 generic.go:334] "Generic (PLEG): container finished" podID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerID="0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b" exitCode=0 Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.496721 4719 generic.go:334] "Generic (PLEG): container finished" podID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerID="bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9" exitCode=143 Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.496736 4719 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.496786 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"67ec838d-9722-4e6e-9a57-08ec9f1acabe","Type":"ContainerDied","Data":"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b"}
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.496811 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"67ec838d-9722-4e6e-9a57-08ec9f1acabe","Type":"ContainerDied","Data":"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9"}
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.496821 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"67ec838d-9722-4e6e-9a57-08ec9f1acabe","Type":"ContainerDied","Data":"c4a34f8282e83980240c851e223e8a0d27f4ed5ebd7518148358cc5aeab57a86"}
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.496835 4719 scope.go:117] "RemoveContainer" containerID="0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.504376 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.511815 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.511836 4719 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.511846 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82h4l\" (UniqueName: \"kubernetes.io/projected/67ec838d-9722-4e6e-9a57-08ec9f1acabe-kube-api-access-82h4l\") on node \"crc\" DevicePath \"\""
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.511854 4719 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.511861 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67ec838d-9722-4e6e-9a57-08ec9f1acabe-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.524931 4719 scope.go:117] "RemoveContainer" containerID="bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.563261 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.565755 4719 scope.go:117] "RemoveContainer" containerID="0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b"
Nov 24 09:14:30 crc kubenswrapper[4719]: E1124 09:14:30.567465 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b\": container with ID starting with 0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b not found: ID does not exist" containerID="0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.567494 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b"} err="failed to get container status \"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b\": rpc error: code = NotFound desc = could not find container \"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b\": container with ID starting with 0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b not found: ID does not exist"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.567516 4719 scope.go:117] "RemoveContainer" containerID="bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9"
Nov 24 09:14:30 crc kubenswrapper[4719]: E1124 09:14:30.568208 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9\": container with ID starting with bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9 not found: ID does not exist" containerID="bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.568236 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9"} err="failed to get container status \"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9\": rpc error: code = NotFound desc = could not find container \"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9\": container with ID starting with bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9 not found: ID does not exist"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.568256 4719 scope.go:117] "RemoveContainer" containerID="0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.568609 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b"} err="failed to get container status \"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b\": rpc error: code = NotFound desc = could not find container \"0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b\": container with ID starting with 0dfe8737177a54db40caac2f8c5a835bfb48ddc70a6ba255c76fcc111b707f1b not found: ID does not exist"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.568624 4719 scope.go:117] "RemoveContainer" containerID="bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.568844 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9"} err="failed to get container status \"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9\": rpc error: code = NotFound desc = could not find container \"bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9\": container with ID starting with bbe471819384d779b5ffd547cc971e017bb6ee41c7117456b3bb8de50c86bbb9 not found: ID does not exist"
REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.606186 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:30 crc kubenswrapper[4719]: E1124 09:14:30.606700 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerName="nova-api-log" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.606723 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerName="nova-api-log" Nov 24 09:14:30 crc kubenswrapper[4719]: E1124 09:14:30.606742 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" containerName="nova-manage" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.606751 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" containerName="nova-manage" Nov 24 09:14:30 crc kubenswrapper[4719]: E1124 09:14:30.606779 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" containerName="dnsmasq-dns" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.606787 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" containerName="dnsmasq-dns" Nov 24 09:14:30 crc kubenswrapper[4719]: E1124 09:14:30.606796 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" containerName="init" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.606803 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" containerName="init" Nov 24 09:14:30 crc kubenswrapper[4719]: E1124 09:14:30.606819 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerName="nova-api-api" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.606828 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerName="nova-api-api" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.607074 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" containerName="nova-manage" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.607099 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerName="nova-api-log" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.607113 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" containerName="nova-api-api" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.607129 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4416ac-0d2d-4fce-a5cf-51baceca7650" containerName="dnsmasq-dns" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.608293 4719 util.go:30] "No sandbox for pod can be found. 
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.610318 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.610861 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.611053 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.616001 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.714743 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/007b5bfc-1e0a-4468-87ae-5fae8c196871-logs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.714820 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-public-tls-certs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.714857 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-internal-tls-certs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.714904 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djbm8\" (UniqueName: \"kubernetes.io/projected/007b5bfc-1e0a-4468-87ae-5fae8c196871-kube-api-access-djbm8\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.714921 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-config-data\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.714949 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.816687 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/007b5bfc-1e0a-4468-87ae-5fae8c196871-logs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.816757 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-public-tls-certs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.816787 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-internal-tls-certs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.816831 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djbm8\" (UniqueName: \"kubernetes.io/projected/007b5bfc-1e0a-4468-87ae-5fae8c196871-kube-api-access-djbm8\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.816850 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-config-data\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.816876 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.817132 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/007b5bfc-1e0a-4468-87ae-5fae8c196871-logs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.827942 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-internal-tls-certs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.840479 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-public-tls-certs\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.846083 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-config-data\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.846236 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/007b5bfc-1e0a-4468-87ae-5fae8c196871-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.849526 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djbm8\" (UniqueName: \"kubernetes.io/projected/007b5bfc-1e0a-4468-87ae-5fae8c196871-kube-api-access-djbm8\") pod \"nova-api-0\" (UID: \"007b5bfc-1e0a-4468-87ae-5fae8c196871\") " pod="openstack/nova-api-0"
pod="openstack/nova-api-0" Nov 24 09:14:30 crc kubenswrapper[4719]: I1124 09:14:30.931577 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.390974 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.505070 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"007b5bfc-1e0a-4468-87ae-5fae8c196871","Type":"ContainerStarted","Data":"62b25d8ff1d5610880a89398467f181365f6629a17e3b8f7b535ee68e9c304f4"} Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.506645 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-metadata" containerID="cri-o://dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa" gracePeriod=30 Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.506785 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-log" containerID="cri-o://a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911" gracePeriod=30 Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.799125 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.841717 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sql7s\" (UniqueName: \"kubernetes.io/projected/200b3f6a-9274-440c-885d-e69a1a5d69e1-kube-api-access-sql7s\") pod \"200b3f6a-9274-440c-885d-e69a1a5d69e1\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.841810 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-config-data\") pod \"200b3f6a-9274-440c-885d-e69a1a5d69e1\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.841950 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-combined-ca-bundle\") pod \"200b3f6a-9274-440c-885d-e69a1a5d69e1\" (UID: \"200b3f6a-9274-440c-885d-e69a1a5d69e1\") " Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.862403 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200b3f6a-9274-440c-885d-e69a1a5d69e1-kube-api-access-sql7s" (OuterVolumeSpecName: "kube-api-access-sql7s") pod "200b3f6a-9274-440c-885d-e69a1a5d69e1" (UID: "200b3f6a-9274-440c-885d-e69a1a5d69e1"). InnerVolumeSpecName "kube-api-access-sql7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.914395 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "200b3f6a-9274-440c-885d-e69a1a5d69e1" (UID: "200b3f6a-9274-440c-885d-e69a1a5d69e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.918376 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-config-data" (OuterVolumeSpecName: "config-data") pod "200b3f6a-9274-440c-885d-e69a1a5d69e1" (UID: "200b3f6a-9274-440c-885d-e69a1a5d69e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.944700 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sql7s\" (UniqueName: \"kubernetes.io/projected/200b3f6a-9274-440c-885d-e69a1a5d69e1-kube-api-access-sql7s\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.944741 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:31 crc kubenswrapper[4719]: I1124 09:14:31.944749 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200b3f6a-9274-440c-885d-e69a1a5d69e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.517171 4719 generic.go:334] "Generic (PLEG): container finished" podID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerID="a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911" exitCode=143 Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.517252 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8dff5214-54a8-41e2-9e2d-d1e491ba2565","Type":"ContainerDied","Data":"a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911"} Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.522621 4719 generic.go:334] "Generic (PLEG): container finished" podID="200b3f6a-9274-440c-885d-e69a1a5d69e1" containerID="2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd" exitCode=0 Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.522757 4719 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.550726 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ec838d-9722-4e6e-9a57-08ec9f1acabe" path="/var/lib/kubelet/pods/67ec838d-9722-4e6e-9a57-08ec9f1acabe/volumes"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.551797 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"007b5bfc-1e0a-4468-87ae-5fae8c196871","Type":"ContainerStarted","Data":"8d00f8dbff72f4fea72b3245a56398b97e5531be40b500733b1421432ee2fe63"}
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.551852 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"007b5bfc-1e0a-4468-87ae-5fae8c196871","Type":"ContainerStarted","Data":"e69917dd28ddda07a28e7dc726d766ce4d4643c095e84c08ef7e148da030086a"}
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.551868 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"200b3f6a-9274-440c-885d-e69a1a5d69e1","Type":"ContainerDied","Data":"2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd"}
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.551883 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"200b3f6a-9274-440c-885d-e69a1a5d69e1","Type":"ContainerDied","Data":"6f4610f0581e954f564caee9ef04e1e2d56aad0337057d158aee9b247bb4fadd"}
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.551904 4719 scope.go:117] "RemoveContainer" containerID="2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.564276 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5642570190000002 podStartE2EDuration="2.564257019s" podCreationTimestamp="2025-11-24 09:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:14:32.560276855 +0000 UTC m=+1248.891550127" watchObservedRunningTime="2025-11-24 09:14:32.564257019 +0000 UTC m=+1248.895530271"
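[Editor's note] The podStartSLOduration above appears to be plain timestamp arithmetic: watch-observed running time minus podCreationTimestamp, with image-pull time excluded (both pull timestamps are zero here, so nothing is subtracted). Checking the nova-api-0 numbers:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-24 09:14:30 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-11-24 09:14:32.564257019 +0000 UTC")
	// Prints 2.564257019 s, matching the logged
	// podStartSLOduration=2.5642570190000002 up to float64 rounding.
	fmt.Println(observed.Sub(created).Seconds())
}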
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.587255 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.598997 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.607532 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.608296 4719 scope.go:117] "RemoveContainer" containerID="2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd"
Nov 24 09:14:32 crc kubenswrapper[4719]: E1124 09:14:32.608503 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200b3f6a-9274-440c-885d-e69a1a5d69e1" containerName="nova-scheduler-scheduler"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.608619 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="200b3f6a-9274-440c-885d-e69a1a5d69e1" containerName="nova-scheduler-scheduler"
Nov 24 09:14:32 crc kubenswrapper[4719]: E1124 09:14:32.608787 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd\": container with ID starting with 2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd not found: ID does not exist" containerID="2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.608826 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd"} err="failed to get container status \"2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd\": rpc error: code = NotFound desc = could not find container \"2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd\": container with ID starting with 2f6b8533594470374a859bcc8fa74514033923b43a44a2ad899fb2f05275d0bd not found: ID does not exist"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.609085 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="200b3f6a-9274-440c-885d-e69a1a5d69e1" containerName="nova-scheduler-scheduler"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.609885 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.612659 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.618338 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.764559 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65fhh\" (UniqueName: \"kubernetes.io/projected/e543db5c-487f-4724-91aa-c3ea4cb33149-kube-api-access-65fhh\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.764738 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e543db5c-487f-4724-91aa-c3ea4cb33149-config-data\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.764784 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e543db5c-487f-4724-91aa-c3ea4cb33149-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.867554 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65fhh\" (UniqueName: \"kubernetes.io/projected/e543db5c-487f-4724-91aa-c3ea4cb33149-kube-api-access-65fhh\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.867707 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e543db5c-487f-4724-91aa-c3ea4cb33149-config-data\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.867757 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e543db5c-487f-4724-91aa-c3ea4cb33149-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.881391 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e543db5c-487f-4724-91aa-c3ea4cb33149-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.881534 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e543db5c-487f-4724-91aa-c3ea4cb33149-config-data\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.886662 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65fhh\" (UniqueName: \"kubernetes.io/projected/e543db5c-487f-4724-91aa-c3ea4cb33149-kube-api-access-65fhh\") pod \"nova-scheduler-0\" (UID: \"e543db5c-487f-4724-91aa-c3ea4cb33149\") " pod="openstack/nova-scheduler-0"
Nov 24 09:14:32 crc kubenswrapper[4719]: I1124 09:14:32.948741 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 09:14:33 crc kubenswrapper[4719]: I1124 09:14:33.530337 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 09:14:34 crc kubenswrapper[4719]: I1124 09:14:34.534853 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200b3f6a-9274-440c-885d-e69a1a5d69e1" path="/var/lib/kubelet/pods/200b3f6a-9274-440c-885d-e69a1a5d69e1/volumes"
Nov 24 09:14:34 crc kubenswrapper[4719]: I1124 09:14:34.552085 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e543db5c-487f-4724-91aa-c3ea4cb33149","Type":"ContainerStarted","Data":"68f66c5e5bd6b306ecf4f97bb2fbe45956213d377f811af93dbd971707b013da"}
Nov 24 09:14:34 crc kubenswrapper[4719]: I1124 09:14:34.552135 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e543db5c-487f-4724-91aa-c3ea4cb33149","Type":"ContainerStarted","Data":"24d0417b9d49f35ac99cfc6153ff63a6190204cb25d7880dcce9c739bcdcc8dc"}
Nov 24 09:14:34 crc kubenswrapper[4719]: I1124 09:14:34.583416 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.583389042 podStartE2EDuration="2.583389042s" podCreationTimestamp="2025-11-24 09:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:14:34.571118829 +0000 UTC m=+1250.902392161" watchObservedRunningTime="2025-11-24 09:14:34.583389042 +0000 UTC m=+1250.914662324"
Nov 24 09:14:34 crc kubenswrapper[4719]: I1124 09:14:34.595290 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.181:8775/\": dial tcp 10.217.0.181:8775: connect: connection refused"
Nov 24 09:14:34 crc kubenswrapper[4719]: I1124 09:14:34.595326 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.181:8775/\": dial tcp 10.217.0.181:8775: connect: connection refused"
pod="openstack/nova-metadata-0" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.181:8775/\": dial tcp 10.217.0.181:8775: connect: connection refused" Nov 24 09:14:34 crc kubenswrapper[4719]: I1124 09:14:34.968181 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.026339 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-nova-metadata-tls-certs\") pod \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.026405 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dff5214-54a8-41e2-9e2d-d1e491ba2565-logs\") pod \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.026434 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-config-data\") pod \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.026481 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5p5r\" (UniqueName: \"kubernetes.io/projected/8dff5214-54a8-41e2-9e2d-d1e491ba2565-kube-api-access-t5p5r\") pod \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.026551 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-combined-ca-bundle\") pod \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\" (UID: \"8dff5214-54a8-41e2-9e2d-d1e491ba2565\") " Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.026939 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dff5214-54a8-41e2-9e2d-d1e491ba2565-logs" (OuterVolumeSpecName: "logs") pod "8dff5214-54a8-41e2-9e2d-d1e491ba2565" (UID: "8dff5214-54a8-41e2-9e2d-d1e491ba2565"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.044251 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dff5214-54a8-41e2-9e2d-d1e491ba2565-kube-api-access-t5p5r" (OuterVolumeSpecName: "kube-api-access-t5p5r") pod "8dff5214-54a8-41e2-9e2d-d1e491ba2565" (UID: "8dff5214-54a8-41e2-9e2d-d1e491ba2565"). InnerVolumeSpecName "kube-api-access-t5p5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.055159 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dff5214-54a8-41e2-9e2d-d1e491ba2565" (UID: "8dff5214-54a8-41e2-9e2d-d1e491ba2565"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.074130 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-config-data" (OuterVolumeSpecName: "config-data") pod "8dff5214-54a8-41e2-9e2d-d1e491ba2565" (UID: "8dff5214-54a8-41e2-9e2d-d1e491ba2565"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.088404 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8dff5214-54a8-41e2-9e2d-d1e491ba2565" (UID: "8dff5214-54a8-41e2-9e2d-d1e491ba2565"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.128710 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5p5r\" (UniqueName: \"kubernetes.io/projected/8dff5214-54a8-41e2-9e2d-d1e491ba2565-kube-api-access-t5p5r\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.128754 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.128764 4719 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.128772 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dff5214-54a8-41e2-9e2d-d1e491ba2565-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.128781 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dff5214-54a8-41e2-9e2d-d1e491ba2565-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.562468 4719 generic.go:334] "Generic (PLEG): container finished" podID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerID="dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa" exitCode=0 Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.562554 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8dff5214-54a8-41e2-9e2d-d1e491ba2565","Type":"ContainerDied","Data":"dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa"} Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.562612 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8dff5214-54a8-41e2-9e2d-d1e491ba2565","Type":"ContainerDied","Data":"98d11bcce40f06aa9fa914b8d0780cba03f962c38cc22092f1f2d73d24a8b2f6"} Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.562636 4719 scope.go:117] "RemoveContainer" containerID="dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.563502 4719 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.603596 4719 scope.go:117] "RemoveContainer" containerID="a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.611663 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.640981 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.648810 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 09:14:35 crc kubenswrapper[4719]: E1124 09:14:35.649264 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-log"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.649280 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-log"
Nov 24 09:14:35 crc kubenswrapper[4719]: E1124 09:14:35.649309 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-metadata"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.649315 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-metadata"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.650153 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-metadata"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.650177 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" containerName="nova-metadata-log"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.651215 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.655442 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.658026 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.658396 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.664477 4719 scope.go:117] "RemoveContainer" containerID="dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa"
Nov 24 09:14:35 crc kubenswrapper[4719]: E1124 09:14:35.664956 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa\": container with ID starting with dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa not found: ID does not exist" containerID="dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.665000 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa"} err="failed to get container status \"dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa\": rpc error: code = NotFound desc = could not find container \"dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa\": container with ID starting with dc311935100b7753c738cbcf25d7264a9d76df79dcdc37e74d15e3f703f7f2aa not found: ID does not exist"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.665017 4719 scope.go:117] "RemoveContainer" containerID="a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911"
Nov 24 09:14:35 crc kubenswrapper[4719]: E1124 09:14:35.665200 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911\": container with ID starting with a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911 not found: ID does not exist" containerID="a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.665216 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911"} err="failed to get container status \"a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911\": rpc error: code = NotFound desc = could not find container \"a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911\": container with ID starting with a9e12b73e855495cd7d2a9c16b6a20ec8a84b18dbc400875cddd4bd8f1693911 not found: ID does not exist"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.742537 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.742625 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csxln\" (UniqueName: \"kubernetes.io/projected/3facc49a-dd07-4db6-b353-a06ff01dc19c-kube-api-access-csxln\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.742661 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3facc49a-dd07-4db6-b353-a06ff01dc19c-logs\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.742754 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-config-data\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.742867 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.844422 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.844498 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csxln\" (UniqueName: \"kubernetes.io/projected/3facc49a-dd07-4db6-b353-a06ff01dc19c-kube-api-access-csxln\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.844523 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3facc49a-dd07-4db6-b353-a06ff01dc19c-logs\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.844566 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-config-data\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.844585 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.846495 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3facc49a-dd07-4db6-b353-a06ff01dc19c-logs\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0"
" pod="openstack/nova-metadata-0" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.849377 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.849547 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-config-data\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.849837 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3facc49a-dd07-4db6-b353-a06ff01dc19c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.869628 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csxln\" (UniqueName: \"kubernetes.io/projected/3facc49a-dd07-4db6-b353-a06ff01dc19c-kube-api-access-csxln\") pod \"nova-metadata-0\" (UID: \"3facc49a-dd07-4db6-b353-a06ff01dc19c\") " pod="openstack/nova-metadata-0" Nov 24 09:14:35 crc kubenswrapper[4719]: I1124 09:14:35.984976 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 09:14:36 crc kubenswrapper[4719]: I1124 09:14:36.427381 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 09:14:36 crc kubenswrapper[4719]: W1124 09:14:36.427705 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3facc49a_dd07_4db6_b353_a06ff01dc19c.slice/crio-19012c50bfbb7ae7379bdd3f98d4ddd52cddfae45062e44e6906e2ec03d6812b WatchSource:0}: Error finding container 19012c50bfbb7ae7379bdd3f98d4ddd52cddfae45062e44e6906e2ec03d6812b: Status 404 returned error can't find the container with id 19012c50bfbb7ae7379bdd3f98d4ddd52cddfae45062e44e6906e2ec03d6812b Nov 24 09:14:36 crc kubenswrapper[4719]: I1124 09:14:36.532076 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dff5214-54a8-41e2-9e2d-d1e491ba2565" path="/var/lib/kubelet/pods/8dff5214-54a8-41e2-9e2d-d1e491ba2565/volumes" Nov 24 09:14:36 crc kubenswrapper[4719]: I1124 09:14:36.572922 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3facc49a-dd07-4db6-b353-a06ff01dc19c","Type":"ContainerStarted","Data":"19012c50bfbb7ae7379bdd3f98d4ddd52cddfae45062e44e6906e2ec03d6812b"} Nov 24 09:14:37 crc kubenswrapper[4719]: I1124 09:14:37.586876 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3facc49a-dd07-4db6-b353-a06ff01dc19c","Type":"ContainerStarted","Data":"617687cac9158ced4528d5a749b5642742f3f480fe91b234f71566378d38d3ed"} Nov 24 09:14:37 crc kubenswrapper[4719]: I1124 09:14:37.587262 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3facc49a-dd07-4db6-b353-a06ff01dc19c","Type":"ContainerStarted","Data":"3598391e17262d4b680d89ba4390cf98be46237783b0dc5362248ac9715171eb"} Nov 24 09:14:37 crc kubenswrapper[4719]: I1124 
Nov 24 09:14:37 crc kubenswrapper[4719]: I1124 09:14:37.628779 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.628756716 podStartE2EDuration="2.628756716s" podCreationTimestamp="2025-11-24 09:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:14:37.612729484 +0000 UTC m=+1253.944002766" watchObservedRunningTime="2025-11-24 09:14:37.628756716 +0000 UTC m=+1253.960029978"
Nov 24 09:14:37 crc kubenswrapper[4719]: I1124 09:14:37.948901 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 24 09:14:40 crc kubenswrapper[4719]: I1124 09:14:40.932648 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 09:14:40 crc kubenswrapper[4719]: I1124 09:14:40.933250 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 09:14:40 crc kubenswrapper[4719]: I1124 09:14:40.985639 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 09:14:40 crc kubenswrapper[4719]: I1124 09:14:40.985994 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 09:14:41 crc kubenswrapper[4719]: I1124 09:14:41.952182 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="007b5bfc-1e0a-4468-87ae-5fae8c196871" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:14:41 crc kubenswrapper[4719]: I1124 09:14:41.952215 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="007b5bfc-1e0a-4468-87ae-5fae8c196871" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:14:42 crc kubenswrapper[4719]: I1124 09:14:42.949619 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 24 09:14:42 crc kubenswrapper[4719]: I1124 09:14:42.981299 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 24 09:14:43 crc kubenswrapper[4719]: I1124 09:14:43.691375 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 24 09:14:45 crc kubenswrapper[4719]: I1124 09:14:45.986143 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 09:14:45 crc kubenswrapper[4719]: I1124 09:14:45.986549 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 09:14:46 crc kubenswrapper[4719]: I1124 09:14:46.926914 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 09:14:46 crc kubenswrapper[4719]: I1124 09:14:46.936964 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 24 09:14:46 crc kubenswrapper[4719]: I1124 09:14:46.998269 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3facc49a-dd07-4db6-b353-a06ff01dc19c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:14:46 crc kubenswrapper[4719]: I1124 09:14:46.998273 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3facc49a-dd07-4db6-b353-a06ff01dc19c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:14:50 crc kubenswrapper[4719]: I1124 09:14:50.937633 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 09:14:50 crc kubenswrapper[4719]: I1124 09:14:50.938318 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 09:14:50 crc kubenswrapper[4719]: I1124 09:14:50.938809 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 09:14:50 crc kubenswrapper[4719]: I1124 09:14:50.938832 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 09:14:50 crc kubenswrapper[4719]: I1124 09:14:50.944727 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 09:14:50 crc kubenswrapper[4719]: I1124 09:14:50.948921 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.042223 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-sg-core-conf-yaml\") pod \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.042280 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-combined-ca-bundle\") pod \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.042314 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-run-httpd\") pod \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.042357 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-log-httpd\") pod \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.042402 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-scripts\") pod \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.042446 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-ceilometer-tls-certs\") pod \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.042536 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-config-data\") pod \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.042624 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn6bh\" (UniqueName: \"kubernetes.io/projected/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-kube-api-access-pn6bh\") pod \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\" (UID: \"d11b298c-1e1b-4501-b8ec-fdf3e2c77254\") " Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.043616 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d11b298c-1e1b-4501-b8ec-fdf3e2c77254" (UID: "d11b298c-1e1b-4501-b8ec-fdf3e2c77254"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.044215 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d11b298c-1e1b-4501-b8ec-fdf3e2c77254" (UID: "d11b298c-1e1b-4501-b8ec-fdf3e2c77254"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.048427 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-scripts" (OuterVolumeSpecName: "scripts") pod "d11b298c-1e1b-4501-b8ec-fdf3e2c77254" (UID: "d11b298c-1e1b-4501-b8ec-fdf3e2c77254"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.048967 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-kube-api-access-pn6bh" (OuterVolumeSpecName: "kube-api-access-pn6bh") pod "d11b298c-1e1b-4501-b8ec-fdf3e2c77254" (UID: "d11b298c-1e1b-4501-b8ec-fdf3e2c77254"). InnerVolumeSpecName "kube-api-access-pn6bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.073800 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d11b298c-1e1b-4501-b8ec-fdf3e2c77254" (UID: "d11b298c-1e1b-4501-b8ec-fdf3e2c77254"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.123717 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d11b298c-1e1b-4501-b8ec-fdf3e2c77254" (UID: "d11b298c-1e1b-4501-b8ec-fdf3e2c77254"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.132573 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d11b298c-1e1b-4501-b8ec-fdf3e2c77254" (UID: "d11b298c-1e1b-4501-b8ec-fdf3e2c77254"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.144882 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.144978 4719 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.144988 4719 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.144997 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.145005 4719 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.145015 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn6bh\" (UniqueName: \"kubernetes.io/projected/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-kube-api-access-pn6bh\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.145025 4719 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.154261 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-config-data" (OuterVolumeSpecName: "config-data") pod "d11b298c-1e1b-4501-b8ec-fdf3e2c77254" (UID: "d11b298c-1e1b-4501-b8ec-fdf3e2c77254"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.246430 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d11b298c-1e1b-4501-b8ec-fdf3e2c77254-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.765887 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.766192 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d11b298c-1e1b-4501-b8ec-fdf3e2c77254","Type":"ContainerDied","Data":"5269c7b5714395aa319b838c14af8dff56b570c20ab34eb588a846e78dd6d9d6"} Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.766260 4719 scope.go:117] "RemoveContainer" containerID="bb78166c61d9d949a394bf43ef5a970c3200d43ce137c3a492a0baeebfeae44f" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.799319 4719 scope.go:117] "RemoveContainer" containerID="5dff983b0eb6808ab3bc4493eb064e9e9d1dcac26865840ee3d6418a7d3306de" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.804819 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.822398 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.830218 4719 scope.go:117] "RemoveContainer" containerID="159aecec2c7c260ffe4959f8b626e510dd5e6ec7340352a9f7d4c4e85f252b2b" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.833128 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:52 crc kubenswrapper[4719]: E1124 09:14:52.833723 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="ceilometer-notification-agent" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.833748 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="ceilometer-notification-agent" Nov 24 09:14:52 crc kubenswrapper[4719]: E1124 09:14:52.833774 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="ceilometer-central-agent" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.833783 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="ceilometer-central-agent" Nov 24 09:14:52 crc kubenswrapper[4719]: E1124 09:14:52.833803 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="sg-core" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.833811 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="sg-core" Nov 24 09:14:52 crc kubenswrapper[4719]: E1124 09:14:52.833830 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="proxy-httpd" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.833838 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="proxy-httpd" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.834059 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="sg-core" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.834081 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="proxy-httpd" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.834095 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="ceilometer-notification-agent" Nov 
24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.834113 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" containerName="ceilometer-central-agent" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.837187 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.851213 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.852303 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.852324 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.852539 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.904585 4719 scope.go:117] "RemoveContainer" containerID="5159834a7eeea2e2deed0ddbc09521c15bcc861c8a6b6817698ef52167b224f1" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.959166 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-scripts\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.959208 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-run-httpd\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.959261 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-log-httpd\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.959287 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.959310 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-config-data\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.959333 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.959392 4719 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qqc\" (UniqueName: \"kubernetes.io/projected/62091726-7f9c-439d-a39a-54ce59e0130b-kube-api-access-28qqc\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:52 crc kubenswrapper[4719]: I1124 09:14:52.959408 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.060844 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-log-httpd\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.060905 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.060944 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-config-data\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.060975 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.061051 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qqc\" (UniqueName: \"kubernetes.io/projected/62091726-7f9c-439d-a39a-54ce59e0130b-kube-api-access-28qqc\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.061074 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.061212 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-scripts\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.061238 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-run-httpd\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0" Nov 24 09:14:53 crc 
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.061792 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-run-httpd\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.062060 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-log-httpd\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.065506 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.067864 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.075300 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-scripts\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.086708 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.088805 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-config-data\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.095583 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qqc\" (UniqueName: \"kubernetes.io/projected/62091726-7f9c-439d-a39a-54ce59e0130b-kube-api-access-28qqc\") pod \"ceilometer-0\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.164659 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 09:14:53 crc kubenswrapper[4719]: W1124 09:14:53.673775 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62091726_7f9c_439d_a39a_54ce59e0130b.slice/crio-1ec7485d58cefbaf3cf8d282bfd5f7fe935eb8182179dbfd369ecd3e18fe4ed3 WatchSource:0}: Error finding container 1ec7485d58cefbaf3cf8d282bfd5f7fe935eb8182179dbfd369ecd3e18fe4ed3: Status 404 returned error can't find the container with id 1ec7485d58cefbaf3cf8d282bfd5f7fe935eb8182179dbfd369ecd3e18fe4ed3
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.676684 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 09:14:53 crc kubenswrapper[4719]: I1124 09:14:53.775578 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerStarted","Data":"1ec7485d58cefbaf3cf8d282bfd5f7fe935eb8182179dbfd369ecd3e18fe4ed3"}
Nov 24 09:14:54 crc kubenswrapper[4719]: I1124 09:14:54.532090 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11b298c-1e1b-4501-b8ec-fdf3e2c77254" path="/var/lib/kubelet/pods/d11b298c-1e1b-4501-b8ec-fdf3e2c77254/volumes"
Nov 24 09:14:54 crc kubenswrapper[4719]: I1124 09:14:54.789249 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerStarted","Data":"68b306888f5524ae2c072d6156995c841184149b57d112d2f15a78e6bae82ac3"}
Nov 24 09:14:55 crc kubenswrapper[4719]: I1124 09:14:55.799379 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerStarted","Data":"400edd73c098a1a6dafb0d4ca888f593bab289641aac9c609a6b5562d406bcfa"}
Nov 24 09:14:55 crc kubenswrapper[4719]: I1124 09:14:55.799710 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerStarted","Data":"4c9552b2f51e8194754c00e5b74df4f294fb35dd3caf9e2d9f19c6c7c5dc7935"}
Nov 24 09:14:55 crc kubenswrapper[4719]: I1124 09:14:55.990830 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 09:14:55 crc kubenswrapper[4719]: I1124 09:14:55.991971 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 09:14:56 crc kubenswrapper[4719]: I1124 09:14:56.004962 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 09:14:56 crc kubenswrapper[4719]: I1124 09:14:56.817007 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 09:14:57 crc kubenswrapper[4719]: I1124 09:14:57.818611 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerStarted","Data":"29d9969e3cd5ff0dc73abea3e92356e9f6ebb0cb6e8d5068348f0030f502f1d5"}
Nov 24 09:14:57 crc kubenswrapper[4719]: I1124 09:14:57.818932 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 09:14:57 crc kubenswrapper[4719]: I1124 09:14:57.838548 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.598287861 podStartE2EDuration="5.838532617s" podCreationTimestamp="2025-11-24 09:14:52 +0000 UTC" firstStartedPulling="2025-11-24 09:14:53.676025474 +0000 UTC m=+1270.007298726" lastFinishedPulling="2025-11-24 09:14:56.91627023 +0000 UTC m=+1273.247543482" observedRunningTime="2025-11-24 09:14:57.836405976 +0000 UTC m=+1274.167679238" watchObservedRunningTime="2025-11-24 09:14:57.838532617 +0000 UTC m=+1274.169805869"
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.148618 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"]
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.150723 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.153024 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.153112 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.191291 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"]
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.194060 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rf5x\" (UniqueName: \"kubernetes.io/projected/f5997f68-a992-410f-839f-80a8fac64cb1-kube-api-access-8rf5x\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.194111 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5997f68-a992-410f-839f-80a8fac64cb1-config-volume\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.194189 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5997f68-a992-410f-839f-80a8fac64cb1-secret-volume\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.296962 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5997f68-a992-410f-839f-80a8fac64cb1-secret-volume\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"
Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.297143 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rf5x\" (UniqueName: \"kubernetes.io/projected/f5997f68-a992-410f-839f-80a8fac64cb1-kube-api-access-8rf5x\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"
pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.297187 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5997f68-a992-410f-839f-80a8fac64cb1-config-volume\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.298504 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5997f68-a992-410f-839f-80a8fac64cb1-config-volume\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.320660 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5997f68-a992-410f-839f-80a8fac64cb1-secret-volume\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.326844 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rf5x\" (UniqueName: \"kubernetes.io/projected/f5997f68-a992-410f-839f-80a8fac64cb1-kube-api-access-8rf5x\") pod \"collect-profiles-29399595-k2g48\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" Nov 24 09:15:00 crc kubenswrapper[4719]: I1124 09:15:00.481141 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" Nov 24 09:15:01 crc kubenswrapper[4719]: I1124 09:15:01.077536 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"] Nov 24 09:15:01 crc kubenswrapper[4719]: I1124 09:15:01.891136 4719 generic.go:334] "Generic (PLEG): container finished" podID="f5997f68-a992-410f-839f-80a8fac64cb1" containerID="1fa600130321fbab71ba96891333dbb1beffc7363f8f3684bc285a17baf6ed45" exitCode=0 Nov 24 09:15:01 crc kubenswrapper[4719]: I1124 09:15:01.891187 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" event={"ID":"f5997f68-a992-410f-839f-80a8fac64cb1","Type":"ContainerDied","Data":"1fa600130321fbab71ba96891333dbb1beffc7363f8f3684bc285a17baf6ed45"} Nov 24 09:15:01 crc kubenswrapper[4719]: I1124 09:15:01.891466 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" event={"ID":"f5997f68-a992-410f-839f-80a8fac64cb1","Type":"ContainerStarted","Data":"48463ed5bff103d88f6d0033590f0dbb09c37bb330459f2ea620a764fc6617b6"} Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.265055 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.371834 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rf5x\" (UniqueName: \"kubernetes.io/projected/f5997f68-a992-410f-839f-80a8fac64cb1-kube-api-access-8rf5x\") pod \"f5997f68-a992-410f-839f-80a8fac64cb1\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.371880 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5997f68-a992-410f-839f-80a8fac64cb1-config-volume\") pod \"f5997f68-a992-410f-839f-80a8fac64cb1\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.372089 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5997f68-a992-410f-839f-80a8fac64cb1-secret-volume\") pod \"f5997f68-a992-410f-839f-80a8fac64cb1\" (UID: \"f5997f68-a992-410f-839f-80a8fac64cb1\") " Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.372629 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5997f68-a992-410f-839f-80a8fac64cb1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5997f68-a992-410f-839f-80a8fac64cb1" (UID: "f5997f68-a992-410f-839f-80a8fac64cb1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.378497 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5997f68-a992-410f-839f-80a8fac64cb1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5997f68-a992-410f-839f-80a8fac64cb1" (UID: "f5997f68-a992-410f-839f-80a8fac64cb1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.378555 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5997f68-a992-410f-839f-80a8fac64cb1-kube-api-access-8rf5x" (OuterVolumeSpecName: "kube-api-access-8rf5x") pod "f5997f68-a992-410f-839f-80a8fac64cb1" (UID: "f5997f68-a992-410f-839f-80a8fac64cb1"). InnerVolumeSpecName "kube-api-access-8rf5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.474287 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rf5x\" (UniqueName: \"kubernetes.io/projected/f5997f68-a992-410f-839f-80a8fac64cb1-kube-api-access-8rf5x\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.474321 4719 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5997f68-a992-410f-839f-80a8fac64cb1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.474334 4719 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5997f68-a992-410f-839f-80a8fac64cb1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.926476 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" event={"ID":"f5997f68-a992-410f-839f-80a8fac64cb1","Type":"ContainerDied","Data":"48463ed5bff103d88f6d0033590f0dbb09c37bb330459f2ea620a764fc6617b6"} Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.926735 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48463ed5bff103d88f6d0033590f0dbb09c37bb330459f2ea620a764fc6617b6" Nov 24 09:15:03 crc kubenswrapper[4719]: I1124 09:15:03.926504 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48" Nov 24 09:15:23 crc kubenswrapper[4719]: I1124 09:15:23.196221 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 09:15:32 crc kubenswrapper[4719]: I1124 09:15:32.133707 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:15:33 crc kubenswrapper[4719]: I1124 09:15:33.545648 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:15:36 crc kubenswrapper[4719]: I1124 09:15:36.483186 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerName="rabbitmq" containerID="cri-o://2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a" gracePeriod=604796 Nov 24 09:15:37 crc kubenswrapper[4719]: I1124 09:15:37.726276 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerName="rabbitmq" containerID="cri-o://8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5" gracePeriod=604796 Nov 24 09:15:42 crc kubenswrapper[4719]: I1124 09:15:42.020835 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Nov 24 09:15:42 crc kubenswrapper[4719]: I1124 09:15:42.516455 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.034656 4719 util.go:48] "No ready 
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168435 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv25k\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-kube-api-access-fv25k\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168524 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-plugins\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168555 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-erlang-cookie-secret\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168576 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-plugins-conf\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168595 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-server-conf\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168632 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-pod-info\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168654 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168736 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-tls\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168759 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-config-data\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168789 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-erlang-cookie\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.168817 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-confd\") pod \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\" (UID: \"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b\") "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.169639 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.170355 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.170757 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.174730 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-kube-api-access-fv25k" (OuterVolumeSpecName: "kube-api-access-fv25k") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "kube-api-access-fv25k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.176946 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-pod-info" (OuterVolumeSpecName: "pod-info") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.178909 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.179195 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.201860 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.205415 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-config-data" (OuterVolumeSpecName: "config-data") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.239745 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-server-conf" (OuterVolumeSpecName: "server-conf") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271380 4719 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271415 4719 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271425 4719 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271435 4719 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-server-conf\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271446 4719 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-pod-info\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271466 4719 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271475 4719 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271484 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271493 4719 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.271502 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv25k\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-kube-api-access-fv25k\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.303307 4719 generic.go:334] "Generic (PLEG): container finished" podID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerID="2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a" exitCode=0
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.303361 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b","Type":"ContainerDied","Data":"2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a"}
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.303387 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cdeb1000-5a68-42d6-af7a-e6c2ca85d94b","Type":"ContainerDied","Data":"7fb82ee214adef520f631e3249a024b92c3938fd053622d79fd98cabd7d70f77"}
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.303404 4719 scope.go:117] "RemoveContainer" containerID="2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a"
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.303612 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.317135 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" (UID: "cdeb1000-5a68-42d6-af7a-e6c2ca85d94b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.319784 4719 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.332985 4719 scope.go:117] "RemoveContainer" containerID="c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05"
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.357553 4719 scope.go:117] "RemoveContainer" containerID="2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a"
Nov 24 09:15:43 crc kubenswrapper[4719]: E1124 09:15:43.358358 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a\": container with ID starting with 2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a not found: ID does not exist" containerID="2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a"
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.358387 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a"} err="failed to get container status \"2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a\": rpc error: code = NotFound desc = could not find container \"2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a\": container with ID starting with 2025843f2b064751ea899f09fe18be435456181a7e51578d4179a023c9c5285a not found: ID does not exist"
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.358408 4719 scope.go:117] "RemoveContainer" containerID="c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05"
Nov 24 09:15:43 crc kubenswrapper[4719]: E1124 09:15:43.358615 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05\": container with ID starting with c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05 not found: ID does not exist" containerID="c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05"
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.358640 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05"} err="failed to get container status \"c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05\": rpc error: code = NotFound desc = could not find container \"c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05\": container with ID starting with c1771d592dc655e20add14c4d69f4bd51d88a3a286a78df17a6a37061d930f05 not found: ID does not exist"
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.375195 4719 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.375249 4719 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.641928 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.650714 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.675369 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:15:43 crc kubenswrapper[4719]: E1124 09:15:43.675832 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerName="rabbitmq" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.675857 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerName="rabbitmq" Nov 24 09:15:43 crc kubenswrapper[4719]: E1124 09:15:43.675877 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5997f68-a992-410f-839f-80a8fac64cb1" containerName="collect-profiles" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.675885 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5997f68-a992-410f-839f-80a8fac64cb1" containerName="collect-profiles" Nov 24 09:15:43 crc kubenswrapper[4719]: E1124 09:15:43.675908 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerName="setup-container" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.675916 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerName="setup-container" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.676145 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" containerName="rabbitmq" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.676179 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5997f68-a992-410f-839f-80a8fac64cb1" containerName="collect-profiles" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.677369 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.680065 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.680228 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-t99s2" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.680345 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.680474 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.680605 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.680712 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.682691 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.698761 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781581 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg496\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-kube-api-access-mg496\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781622 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781694 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/576b0826-aefe-4ef2-b0f8-77e8d7811a29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781765 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781804 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-config-data\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781916 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781946 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781981 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.781999 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.782017 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.782052 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/576b0826-aefe-4ef2-b0f8-77e8d7811a29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.884239 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg496\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-kube-api-access-mg496\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.884637 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885511 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/576b0826-aefe-4ef2-b0f8-77e8d7811a29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885545 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " 
pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885572 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-config-data\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885643 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885672 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885700 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885725 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885730 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885746 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.885772 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/576b0826-aefe-4ef2-b0f8-77e8d7811a29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.886295 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.886916 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-config-data\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.887161 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.887710 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.888534 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/576b0826-aefe-4ef2-b0f8-77e8d7811a29-server-conf\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.890172 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.890594 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/576b0826-aefe-4ef2-b0f8-77e8d7811a29-pod-info\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.894606 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.895012 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/576b0826-aefe-4ef2-b0f8-77e8d7811a29-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.906194 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg496\" (UniqueName: \"kubernetes.io/projected/576b0826-aefe-4ef2-b0f8-77e8d7811a29-kube-api-access-mg496\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:43 crc kubenswrapper[4719]: I1124 09:15:43.915606 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"576b0826-aefe-4ef2-b0f8-77e8d7811a29\") " pod="openstack/rabbitmq-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.084239 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.167282 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.298588 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-plugins-conf\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.298688 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/957bbc3c-6b1d-403a-a49d-6bafef454a48-erlang-cookie-secret\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.298750 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.298813 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq86r\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-kube-api-access-qq86r\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.298840 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-config-data\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.298903 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-plugins\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.298946 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-server-conf\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.298977 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-erlang-cookie\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.299016 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-tls\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.299064 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/957bbc3c-6b1d-403a-a49d-6bafef454a48-pod-info\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.299085 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-confd\") pod \"957bbc3c-6b1d-403a-a49d-6bafef454a48\" (UID: \"957bbc3c-6b1d-403a-a49d-6bafef454a48\") " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.301002 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.301150 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.301171 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.304271 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.317119 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/957bbc3c-6b1d-403a-a49d-6bafef454a48-pod-info" (OuterVolumeSpecName: "pod-info") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.317271 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.318426 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957bbc3c-6b1d-403a-a49d-6bafef454a48-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.327417 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-kube-api-access-qq86r" (OuterVolumeSpecName: "kube-api-access-qq86r") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "kube-api-access-qq86r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.336326 4719 generic.go:334] "Generic (PLEG): container finished" podID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerID="8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5" exitCode=0 Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.336363 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"957bbc3c-6b1d-403a-a49d-6bafef454a48","Type":"ContainerDied","Data":"8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5"} Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.336438 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.336458 4719 scope.go:117] "RemoveContainer" containerID="8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.336446 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"957bbc3c-6b1d-403a-a49d-6bafef454a48","Type":"ContainerDied","Data":"cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc"} Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.337512 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-config-data" (OuterVolumeSpecName: "config-data") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.382599 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-server-conf" (OuterVolumeSpecName: "server-conf") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.388180 4719 scope.go:117] "RemoveContainer" containerID="4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400617 4719 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400652 4719 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/957bbc3c-6b1d-403a-a49d-6bafef454a48-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400675 4719 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400686 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq86r\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-kube-api-access-qq86r\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400696 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400705 4719 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400713 4719 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/957bbc3c-6b1d-403a-a49d-6bafef454a48-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400749 4719 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400758 4719 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.400765 4719 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/957bbc3c-6b1d-403a-a49d-6bafef454a48-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.426005 4719 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.434807 4719 scope.go:117] "RemoveContainer" containerID="8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5" Nov 24 09:15:44 crc kubenswrapper[4719]: E1124 09:15:44.435291 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5\": container with ID starting with 8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5 not found: ID does not exist" containerID="8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.435331 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5"} err="failed to get container status \"8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5\": rpc error: code = NotFound desc = could not find container \"8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5\": container with ID starting with 8df49addefbb8d825f4d026208660434fff78a4016a2ed3403c96c97c46768b5 not found: ID does not exist" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.435352 4719 scope.go:117] "RemoveContainer" containerID="4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d" Nov 24 09:15:44 crc kubenswrapper[4719]: E1124 09:15:44.435591 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d\": container with ID starting with 4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d not found: ID does not exist" containerID="4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.435612 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d"} err="failed to get container status \"4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d\": rpc error: code = NotFound desc = could not find container \"4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d\": container with ID starting with 4a8f0a27407ab24ce981574879de37159dee198c2d64e89bb371cc30f0da695d not found: ID does not exist" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.448236 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "957bbc3c-6b1d-403a-a49d-6bafef454a48" (UID: "957bbc3c-6b1d-403a-a49d-6bafef454a48"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.502562 4719 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/957bbc3c-6b1d-403a-a49d-6bafef454a48-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.502597 4719 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.530885 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdeb1000-5a68-42d6-af7a-e6c2ca85d94b" path="/var/lib/kubelet/pods/cdeb1000-5a68-42d6-af7a-e6c2ca85d94b/volumes" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.596594 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.663092 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.671423 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.689662 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:15:44 crc kubenswrapper[4719]: E1124 09:15:44.690060 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerName="setup-container" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.690076 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerName="setup-container" Nov 24 09:15:44 crc kubenswrapper[4719]: E1124 09:15:44.690103 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerName="rabbitmq" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.690109 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerName="rabbitmq" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.690276 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" containerName="rabbitmq" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.691174 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.694011 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.696746 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.696825 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.696962 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.697236 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9zhq9" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.697294 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.697496 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.718275 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808254 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdc73497-dc8e-44ef-b146-be6598f87eec-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808316 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808374 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b426l\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-kube-api-access-b426l\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808396 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808414 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808428 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808513 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808539 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdc73497-dc8e-44ef-b146-be6598f87eec-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808583 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808605 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.808620 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910189 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910245 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910272 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910297 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/cdc73497-dc8e-44ef-b146-be6598f87eec-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910330 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910389 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b426l\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-kube-api-access-b426l\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910415 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910440 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910460 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910527 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.910554 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdc73497-dc8e-44ef-b146-be6598f87eec-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.911691 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.911925 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.912362 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.912693 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.912904 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.913537 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cdc73497-dc8e-44ef-b146-be6598f87eec-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.914393 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cdc73497-dc8e-44ef-b146-be6598f87eec-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.915313 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.924575 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.928562 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cdc73497-dc8e-44ef-b146-be6598f87eec-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.950541 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b426l\" (UniqueName: \"kubernetes.io/projected/cdc73497-dc8e-44ef-b146-be6598f87eec-kube-api-access-b426l\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:44 crc kubenswrapper[4719]: I1124 09:15:44.960574 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cdc73497-dc8e-44ef-b146-be6598f87eec\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:45 crc kubenswrapper[4719]: I1124 09:15:45.022979 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:15:45 crc kubenswrapper[4719]: I1124 09:15:45.348214 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"576b0826-aefe-4ef2-b0f8-77e8d7811a29","Type":"ContainerStarted","Data":"8f122edf069062e3f5229db9c3edc49272e2e5713455f96ecc5ce898f86d8813"} Nov 24 09:15:45 crc kubenswrapper[4719]: I1124 09:15:45.481585 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 09:15:45 crc kubenswrapper[4719]: W1124 09:15:45.514125 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdc73497_dc8e_44ef_b146_be6598f87eec.slice/crio-a359ce0a98c40bc6dc4c067dbb4179aea35a8b0de93f1fbce13c4bf332dfaa9f WatchSource:0}: Error finding container a359ce0a98c40bc6dc4c067dbb4179aea35a8b0de93f1fbce13c4bf332dfaa9f: Status 404 returned error can't find the container with id a359ce0a98c40bc6dc4c067dbb4179aea35a8b0de93f1fbce13c4bf332dfaa9f Nov 24 09:15:46 crc kubenswrapper[4719]: I1124 09:15:46.356816 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdc73497-dc8e-44ef-b146-be6598f87eec","Type":"ContainerStarted","Data":"a359ce0a98c40bc6dc4c067dbb4179aea35a8b0de93f1fbce13c4bf332dfaa9f"} Nov 24 09:15:46 crc kubenswrapper[4719]: I1124 09:15:46.358304 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"576b0826-aefe-4ef2-b0f8-77e8d7811a29","Type":"ContainerStarted","Data":"d62eef0c591aef72dffd80e7336949e6ba3fe4914a01b3e64e4e7023a12e2f3c"} Nov 24 09:15:46 crc kubenswrapper[4719]: I1124 09:15:46.532329 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="957bbc3c-6b1d-403a-a49d-6bafef454a48" path="/var/lib/kubelet/pods/957bbc3c-6b1d-403a-a49d-6bafef454a48/volumes" Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.367166 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdc73497-dc8e-44ef-b146-be6598f87eec","Type":"ContainerStarted","Data":"692bd5d810415260b8afd88ee9e22826c21d17581c021291e49e491617c1a792"} Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.924560 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qvhhk"] Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.926641 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.936559 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.946661 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qvhhk"] Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.967294 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlk9l\" (UniqueName: \"kubernetes.io/projected/c187c8dd-cf83-454c-8b07-57733094f79e-kube-api-access-hlk9l\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.967340 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.967370 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.967399 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.967474 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-config\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:47 crc kubenswrapper[4719]: I1124 09:15:47.967506 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.068426 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-config\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.068491 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: 
\"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.068528 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlk9l\" (UniqueName: \"kubernetes.io/projected/c187c8dd-cf83-454c-8b07-57733094f79e-kube-api-access-hlk9l\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.068552 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.068575 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.068600 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.069386 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.069883 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-config\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.070393 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.071153 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.071620 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 
09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.091951 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlk9l\" (UniqueName: \"kubernetes.io/projected/c187c8dd-cf83-454c-8b07-57733094f79e-kube-api-access-hlk9l\") pod \"dnsmasq-dns-578b8d767c-qvhhk\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.250932 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:48 crc kubenswrapper[4719]: I1124 09:15:48.721622 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qvhhk"] Nov 24 09:15:48 crc kubenswrapper[4719]: W1124 09:15:48.753611 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc187c8dd_cf83_454c_8b07_57733094f79e.slice/crio-8dbf31dcd8c7b5a731e9a5b9de4eef2a03d6d4e6086267d32e8eb813da19fb80 WatchSource:0}: Error finding container 8dbf31dcd8c7b5a731e9a5b9de4eef2a03d6d4e6086267d32e8eb813da19fb80: Status 404 returned error can't find the container with id 8dbf31dcd8c7b5a731e9a5b9de4eef2a03d6d4e6086267d32e8eb813da19fb80 Nov 24 09:15:49 crc kubenswrapper[4719]: I1124 09:15:49.395327 4719 generic.go:334] "Generic (PLEG): container finished" podID="c187c8dd-cf83-454c-8b07-57733094f79e" containerID="6622fe096d4c8865a58006dcf037a110f5f6266451209f12c2cdb2e841e78dc1" exitCode=0 Nov 24 09:15:49 crc kubenswrapper[4719]: I1124 09:15:49.395366 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" event={"ID":"c187c8dd-cf83-454c-8b07-57733094f79e","Type":"ContainerDied","Data":"6622fe096d4c8865a58006dcf037a110f5f6266451209f12c2cdb2e841e78dc1"} Nov 24 09:15:49 crc kubenswrapper[4719]: I1124 09:15:49.395429 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" event={"ID":"c187c8dd-cf83-454c-8b07-57733094f79e","Type":"ContainerStarted","Data":"8dbf31dcd8c7b5a731e9a5b9de4eef2a03d6d4e6086267d32e8eb813da19fb80"} Nov 24 09:15:50 crc kubenswrapper[4719]: I1124 09:15:50.411449 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" event={"ID":"c187c8dd-cf83-454c-8b07-57733094f79e","Type":"ContainerStarted","Data":"fd3b07468bd69b9633007dfddaa64d7c23e262fba3cbfaced4608d1f2af87440"} Nov 24 09:15:50 crc kubenswrapper[4719]: I1124 09:15:50.411804 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:50 crc kubenswrapper[4719]: I1124 09:15:50.453730 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" podStartSLOduration=3.45370803 podStartE2EDuration="3.45370803s" podCreationTimestamp="2025-11-24 09:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:15:50.451616149 +0000 UTC m=+1326.782889411" watchObservedRunningTime="2025-11-24 09:15:50.45370803 +0000 UTC m=+1326.784981282" Nov 24 09:15:52 crc kubenswrapper[4719]: E1124 09:15:52.754681 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice/crio-cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc\": RecentStats: unable to find data in memory cache]" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.252221 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.319638 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-7nc5m"] Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.319851 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" podUID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" containerName="dnsmasq-dns" containerID="cri-o://fae42a51fc3c74ecfbe7893972022bc7fb115d666cb1d439138b2b2ff744b504" gracePeriod=10 Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.493334 4719 generic.go:334] "Generic (PLEG): container finished" podID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" containerID="fae42a51fc3c74ecfbe7893972022bc7fb115d666cb1d439138b2b2ff744b504" exitCode=0 Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.493389 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" event={"ID":"fd1ef8b1-96f2-488a-aa4d-de553fa73425","Type":"ContainerDied","Data":"fae42a51fc3c74ecfbe7893972022bc7fb115d666cb1d439138b2b2ff744b504"} Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.531511 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-w52mp"] Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.533092 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.553460 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-w52mp"] Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.564083 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.564652 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.564738 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-config\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.564764 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvxh\" (UniqueName: \"kubernetes.io/projected/b6c26c2d-008f-4cc0-99db-80a8e21c3537-kube-api-access-sfvxh\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.564822 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.564861 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-dns-svc\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.667111 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-dns-svc\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.667188 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.667293 4719 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.667336 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-config\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.667357 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvxh\" (UniqueName: \"kubernetes.io/projected/b6c26c2d-008f-4cc0-99db-80a8e21c3537-kube-api-access-sfvxh\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.667385 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.668313 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-openstack-edpm-ipam\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.668591 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-sb\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.668683 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-config\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.668734 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-nb\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.669398 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-dns-svc\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.707200 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvxh\" (UniqueName: 
\"kubernetes.io/projected/b6c26c2d-008f-4cc0-99db-80a8e21c3537-kube-api-access-sfvxh\") pod \"dnsmasq-dns-667ff9c869-w52mp\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.860723 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.940513 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.973204 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-sb\") pod \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.973286 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-config\") pod \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.973330 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-nb\") pod \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.973540 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-dns-svc\") pod \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.973591 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pwd7\" (UniqueName: \"kubernetes.io/projected/fd1ef8b1-96f2-488a-aa4d-de553fa73425-kube-api-access-8pwd7\") pod \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " Nov 24 09:15:58 crc kubenswrapper[4719]: I1124 09:15:58.981222 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd1ef8b1-96f2-488a-aa4d-de553fa73425-kube-api-access-8pwd7" (OuterVolumeSpecName: "kube-api-access-8pwd7") pod "fd1ef8b1-96f2-488a-aa4d-de553fa73425" (UID: "fd1ef8b1-96f2-488a-aa4d-de553fa73425"). InnerVolumeSpecName "kube-api-access-8pwd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.075927 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd1ef8b1-96f2-488a-aa4d-de553fa73425" (UID: "fd1ef8b1-96f2-488a-aa4d-de553fa73425"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.076298 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-dns-svc\") pod \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\" (UID: \"fd1ef8b1-96f2-488a-aa4d-de553fa73425\") " Nov 24 09:15:59 crc kubenswrapper[4719]: W1124 09:15:59.077202 4719 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fd1ef8b1-96f2-488a-aa4d-de553fa73425/volumes/kubernetes.io~configmap/dns-svc Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.077225 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd1ef8b1-96f2-488a-aa4d-de553fa73425" (UID: "fd1ef8b1-96f2-488a-aa4d-de553fa73425"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.077303 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pwd7\" (UniqueName: \"kubernetes.io/projected/fd1ef8b1-96f2-488a-aa4d-de553fa73425-kube-api-access-8pwd7\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.077319 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.079591 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fd1ef8b1-96f2-488a-aa4d-de553fa73425" (UID: "fd1ef8b1-96f2-488a-aa4d-de553fa73425"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.079864 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd1ef8b1-96f2-488a-aa4d-de553fa73425" (UID: "fd1ef8b1-96f2-488a-aa4d-de553fa73425"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.082292 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-config" (OuterVolumeSpecName: "config") pod "fd1ef8b1-96f2-488a-aa4d-de553fa73425" (UID: "fd1ef8b1-96f2-488a-aa4d-de553fa73425"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.180104 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.180150 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.180163 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd1ef8b1-96f2-488a-aa4d-de553fa73425-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.387159 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-w52mp"] Nov 24 09:15:59 crc kubenswrapper[4719]: W1124 09:15:59.390403 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c26c2d_008f_4cc0_99db_80a8e21c3537.slice/crio-a992d898d40d1db99888a165dee7a4475738009285c04be069422dcd2d9971c3 WatchSource:0}: Error finding container a992d898d40d1db99888a165dee7a4475738009285c04be069422dcd2d9971c3: Status 404 returned error can't find the container with id a992d898d40d1db99888a165dee7a4475738009285c04be069422dcd2d9971c3 Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.507687 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" event={"ID":"b6c26c2d-008f-4cc0-99db-80a8e21c3537","Type":"ContainerStarted","Data":"a992d898d40d1db99888a165dee7a4475738009285c04be069422dcd2d9971c3"} Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.510423 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" event={"ID":"fd1ef8b1-96f2-488a-aa4d-de553fa73425","Type":"ContainerDied","Data":"3785f3270577fcbc8845ef71dcfe44659f86f850093ed63968ca4621ab1cb9f3"} Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.510464 4719 scope.go:117] "RemoveContainer" containerID="fae42a51fc3c74ecfbe7893972022bc7fb115d666cb1d439138b2b2ff744b504" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.510603 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-7nc5m" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.558512 4719 scope.go:117] "RemoveContainer" containerID="0955e9090d494ea81143f7eab3e78019a9a733e2999196e5a6efb61fec9ff4e0" Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.564724 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-7nc5m"] Nov 24 09:15:59 crc kubenswrapper[4719]: I1124 09:15:59.572094 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-7nc5m"] Nov 24 09:16:00 crc kubenswrapper[4719]: I1124 09:16:00.519713 4719 generic.go:334] "Generic (PLEG): container finished" podID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" containerID="f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff" exitCode=0 Nov 24 09:16:00 crc kubenswrapper[4719]: I1124 09:16:00.537871 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" path="/var/lib/kubelet/pods/fd1ef8b1-96f2-488a-aa4d-de553fa73425/volumes" Nov 24 09:16:00 crc kubenswrapper[4719]: I1124 09:16:00.539151 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" event={"ID":"b6c26c2d-008f-4cc0-99db-80a8e21c3537","Type":"ContainerDied","Data":"f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff"} Nov 24 09:16:01 crc kubenswrapper[4719]: I1124 09:16:01.531538 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" event={"ID":"b6c26c2d-008f-4cc0-99db-80a8e21c3537","Type":"ContainerStarted","Data":"002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0"} Nov 24 09:16:01 crc kubenswrapper[4719]: I1124 09:16:01.532767 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:16:01 crc kubenswrapper[4719]: I1124 09:16:01.560425 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" podStartSLOduration=3.56040792 podStartE2EDuration="3.56040792s" podCreationTimestamp="2025-11-24 09:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:16:01.552182503 +0000 UTC m=+1337.883455775" watchObservedRunningTime="2025-11-24 09:16:01.56040792 +0000 UTC m=+1337.891681172" Nov 24 09:16:02 crc kubenswrapper[4719]: E1124 09:16:02.988589 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice/crio-cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice\": RecentStats: unable to find data in memory cache]" Nov 24 09:16:04 crc kubenswrapper[4719]: I1124 09:16:04.561913 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:16:04 crc kubenswrapper[4719]: I1124 09:16:04.561971 4719 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:16:08 crc kubenswrapper[4719]: I1124 09:16:08.863359 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:16:08 crc kubenswrapper[4719]: I1124 09:16:08.935510 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qvhhk"] Nov 24 09:16:08 crc kubenswrapper[4719]: I1124 09:16:08.935775 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" podUID="c187c8dd-cf83-454c-8b07-57733094f79e" containerName="dnsmasq-dns" containerID="cri-o://fd3b07468bd69b9633007dfddaa64d7c23e262fba3cbfaced4608d1f2af87440" gracePeriod=10 Nov 24 09:16:09 crc kubenswrapper[4719]: I1124 09:16:09.606466 4719 generic.go:334] "Generic (PLEG): container finished" podID="c187c8dd-cf83-454c-8b07-57733094f79e" containerID="fd3b07468bd69b9633007dfddaa64d7c23e262fba3cbfaced4608d1f2af87440" exitCode=0 Nov 24 09:16:09 crc kubenswrapper[4719]: I1124 09:16:09.606515 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" event={"ID":"c187c8dd-cf83-454c-8b07-57733094f79e","Type":"ContainerDied","Data":"fd3b07468bd69b9633007dfddaa64d7c23e262fba3cbfaced4608d1f2af87440"} Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.010755 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.179301 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-dns-svc\") pod \"c187c8dd-cf83-454c-8b07-57733094f79e\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.180325 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlk9l\" (UniqueName: \"kubernetes.io/projected/c187c8dd-cf83-454c-8b07-57733094f79e-kube-api-access-hlk9l\") pod \"c187c8dd-cf83-454c-8b07-57733094f79e\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.180394 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-nb\") pod \"c187c8dd-cf83-454c-8b07-57733094f79e\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.180495 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-openstack-edpm-ipam\") pod \"c187c8dd-cf83-454c-8b07-57733094f79e\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.180536 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-sb\") pod \"c187c8dd-cf83-454c-8b07-57733094f79e\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " Nov 24 09:16:10 crc 
kubenswrapper[4719]: I1124 09:16:10.180585 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-config\") pod \"c187c8dd-cf83-454c-8b07-57733094f79e\" (UID: \"c187c8dd-cf83-454c-8b07-57733094f79e\") " Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.187644 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c187c8dd-cf83-454c-8b07-57733094f79e-kube-api-access-hlk9l" (OuterVolumeSpecName: "kube-api-access-hlk9l") pod "c187c8dd-cf83-454c-8b07-57733094f79e" (UID: "c187c8dd-cf83-454c-8b07-57733094f79e"). InnerVolumeSpecName "kube-api-access-hlk9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.231775 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c187c8dd-cf83-454c-8b07-57733094f79e" (UID: "c187c8dd-cf83-454c-8b07-57733094f79e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.242361 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-config" (OuterVolumeSpecName: "config") pod "c187c8dd-cf83-454c-8b07-57733094f79e" (UID: "c187c8dd-cf83-454c-8b07-57733094f79e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.244687 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c187c8dd-cf83-454c-8b07-57733094f79e" (UID: "c187c8dd-cf83-454c-8b07-57733094f79e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.248169 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c187c8dd-cf83-454c-8b07-57733094f79e" (UID: "c187c8dd-cf83-454c-8b07-57733094f79e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.261696 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c187c8dd-cf83-454c-8b07-57733094f79e" (UID: "c187c8dd-cf83-454c-8b07-57733094f79e"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.283605 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.283995 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlk9l\" (UniqueName: \"kubernetes.io/projected/c187c8dd-cf83-454c-8b07-57733094f79e-kube-api-access-hlk9l\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.284009 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.284020 4719 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.284030 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.284055 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c187c8dd-cf83-454c-8b07-57733094f79e-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.615829 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" event={"ID":"c187c8dd-cf83-454c-8b07-57733094f79e","Type":"ContainerDied","Data":"8dbf31dcd8c7b5a731e9a5b9de4eef2a03d6d4e6086267d32e8eb813da19fb80"} Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.615884 4719 scope.go:117] "RemoveContainer" containerID="fd3b07468bd69b9633007dfddaa64d7c23e262fba3cbfaced4608d1f2af87440" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.615886 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-qvhhk" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.640962 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qvhhk"] Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.645820 4719 scope.go:117] "RemoveContainer" containerID="6622fe096d4c8865a58006dcf037a110f5f6266451209f12c2cdb2e841e78dc1" Nov 24 09:16:10 crc kubenswrapper[4719]: I1124 09:16:10.649486 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-qvhhk"] Nov 24 09:16:12 crc kubenswrapper[4719]: I1124 09:16:12.535517 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c187c8dd-cf83-454c-8b07-57733094f79e" path="/var/lib/kubelet/pods/c187c8dd-cf83-454c-8b07-57733094f79e/volumes" Nov 24 09:16:13 crc kubenswrapper[4719]: E1124 09:16:13.217346 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice/crio-cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice\": RecentStats: unable to find data in memory cache]" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.791421 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg"] Nov 24 09:16:14 crc kubenswrapper[4719]: E1124 09:16:14.792081 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c187c8dd-cf83-454c-8b07-57733094f79e" containerName="dnsmasq-dns" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.792094 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c187c8dd-cf83-454c-8b07-57733094f79e" containerName="dnsmasq-dns" Nov 24 09:16:14 crc kubenswrapper[4719]: E1124 09:16:14.792139 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" containerName="dnsmasq-dns" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.792145 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" containerName="dnsmasq-dns" Nov 24 09:16:14 crc kubenswrapper[4719]: E1124 09:16:14.792153 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" containerName="init" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.792159 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" containerName="init" Nov 24 09:16:14 crc kubenswrapper[4719]: E1124 09:16:14.792167 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c187c8dd-cf83-454c-8b07-57733094f79e" containerName="init" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.792172 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c187c8dd-cf83-454c-8b07-57733094f79e" containerName="init" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.792375 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="c187c8dd-cf83-454c-8b07-57733094f79e" containerName="dnsmasq-dns" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.792399 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd1ef8b1-96f2-488a-aa4d-de553fa73425" containerName="dnsmasq-dns" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 
09:16:14.793222 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.798067 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.799716 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.801613 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.802313 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg"] Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.803716 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.820418 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.820496 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.820553 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfcnp\" (UniqueName: \"kubernetes.io/projected/87710985-771e-4a43-a5d1-4933e8fc0ecf-kube-api-access-mfcnp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.820597 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.921800 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.921890 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.921943 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfcnp\" (UniqueName: \"kubernetes.io/projected/87710985-771e-4a43-a5d1-4933e8fc0ecf-kube-api-access-mfcnp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.921991 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.928056 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.930430 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.931773 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:14 crc kubenswrapper[4719]: I1124 09:16:14.942251 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfcnp\" (UniqueName: \"kubernetes.io/projected/87710985-771e-4a43-a5d1-4933e8fc0ecf-kube-api-access-mfcnp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:15 crc kubenswrapper[4719]: I1124 09:16:15.113986 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:15 crc kubenswrapper[4719]: I1124 09:16:15.682228 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg"] Nov 24 09:16:15 crc kubenswrapper[4719]: W1124 09:16:15.688768 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87710985_771e_4a43_a5d1_4933e8fc0ecf.slice/crio-517d882160920e9c3bfccd9d367ad8fd07fe990fb75d88c08a594fc5f5f66a4f WatchSource:0}: Error finding container 517d882160920e9c3bfccd9d367ad8fd07fe990fb75d88c08a594fc5f5f66a4f: Status 404 returned error can't find the container with id 517d882160920e9c3bfccd9d367ad8fd07fe990fb75d88c08a594fc5f5f66a4f Nov 24 09:16:16 crc kubenswrapper[4719]: I1124 09:16:16.672606 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" event={"ID":"87710985-771e-4a43-a5d1-4933e8fc0ecf","Type":"ContainerStarted","Data":"517d882160920e9c3bfccd9d367ad8fd07fe990fb75d88c08a594fc5f5f66a4f"} Nov 24 09:16:18 crc kubenswrapper[4719]: I1124 09:16:18.695709 4719 generic.go:334] "Generic (PLEG): container finished" podID="576b0826-aefe-4ef2-b0f8-77e8d7811a29" containerID="d62eef0c591aef72dffd80e7336949e6ba3fe4914a01b3e64e4e7023a12e2f3c" exitCode=0 Nov 24 09:16:18 crc kubenswrapper[4719]: I1124 09:16:18.695700 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"576b0826-aefe-4ef2-b0f8-77e8d7811a29","Type":"ContainerDied","Data":"d62eef0c591aef72dffd80e7336949e6ba3fe4914a01b3e64e4e7023a12e2f3c"} Nov 24 09:16:19 crc kubenswrapper[4719]: I1124 09:16:19.704857 4719 generic.go:334] "Generic (PLEG): container finished" podID="cdc73497-dc8e-44ef-b146-be6598f87eec" containerID="692bd5d810415260b8afd88ee9e22826c21d17581c021291e49e491617c1a792" exitCode=0 Nov 24 09:16:19 crc kubenswrapper[4719]: I1124 09:16:19.705503 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdc73497-dc8e-44ef-b146-be6598f87eec","Type":"ContainerDied","Data":"692bd5d810415260b8afd88ee9e22826c21d17581c021291e49e491617c1a792"} Nov 24 09:16:19 crc kubenswrapper[4719]: I1124 09:16:19.709685 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"576b0826-aefe-4ef2-b0f8-77e8d7811a29","Type":"ContainerStarted","Data":"b90956cbcd6e8eebb3733e15db01568091ff7e9a84d3359049e14cd240ec1d8f"} Nov 24 09:16:19 crc kubenswrapper[4719]: I1124 09:16:19.709897 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 09:16:23 crc kubenswrapper[4719]: E1124 09:16:23.448811 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice/crio-cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc\": RecentStats: unable to find data in memory cache]" Nov 24 09:16:24 crc kubenswrapper[4719]: I1124 09:16:24.549674 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.549629115 
podStartE2EDuration="41.549629115s" podCreationTimestamp="2025-11-24 09:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:16:19.750829008 +0000 UTC m=+1356.082102280" watchObservedRunningTime="2025-11-24 09:16:24.549629115 +0000 UTC m=+1360.880902367" Nov 24 09:16:26 crc kubenswrapper[4719]: I1124 09:16:26.779083 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" event={"ID":"87710985-771e-4a43-a5d1-4933e8fc0ecf","Type":"ContainerStarted","Data":"e3a56ea6a41bc25619f8e603ea1affc41f06dc90a2b14c6687009e8de23f33f6"} Nov 24 09:16:26 crc kubenswrapper[4719]: I1124 09:16:26.781394 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cdc73497-dc8e-44ef-b146-be6598f87eec","Type":"ContainerStarted","Data":"3abc0da5be4352541a0795723d34375dd2d59868ca1c2562e646990c3e522f58"} Nov 24 09:16:26 crc kubenswrapper[4719]: I1124 09:16:26.782022 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:16:26 crc kubenswrapper[4719]: I1124 09:16:26.805600 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" podStartSLOduration=1.9956429820000001 podStartE2EDuration="12.805579041s" podCreationTimestamp="2025-11-24 09:16:14 +0000 UTC" firstStartedPulling="2025-11-24 09:16:15.690382787 +0000 UTC m=+1352.021656039" lastFinishedPulling="2025-11-24 09:16:26.500318846 +0000 UTC m=+1362.831592098" observedRunningTime="2025-11-24 09:16:26.80418482 +0000 UTC m=+1363.135458092" watchObservedRunningTime="2025-11-24 09:16:26.805579041 +0000 UTC m=+1363.136852303" Nov 24 09:16:26 crc kubenswrapper[4719]: I1124 09:16:26.845485 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.845467498 podStartE2EDuration="42.845467498s" podCreationTimestamp="2025-11-24 09:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:16:26.83789567 +0000 UTC m=+1363.169168942" watchObservedRunningTime="2025-11-24 09:16:26.845467498 +0000 UTC m=+1363.176740750" Nov 24 09:16:33 crc kubenswrapper[4719]: E1124 09:16:33.677289 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice/crio-cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice\": RecentStats: unable to find data in memory cache]" Nov 24 09:16:34 crc kubenswrapper[4719]: I1124 09:16:34.088305 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 09:16:34 crc kubenswrapper[4719]: I1124 09:16:34.562090 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:16:34 crc kubenswrapper[4719]: I1124 
09:16:34.562402 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:16:37 crc kubenswrapper[4719]: I1124 09:16:37.882187 4719 generic.go:334] "Generic (PLEG): container finished" podID="87710985-771e-4a43-a5d1-4933e8fc0ecf" containerID="e3a56ea6a41bc25619f8e603ea1affc41f06dc90a2b14c6687009e8de23f33f6" exitCode=0 Nov 24 09:16:37 crc kubenswrapper[4719]: I1124 09:16:37.882272 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" event={"ID":"87710985-771e-4a43-a5d1-4933e8fc0ecf","Type":"ContainerDied","Data":"e3a56ea6a41bc25619f8e603ea1affc41f06dc90a2b14c6687009e8de23f33f6"} Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.310547 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.350893 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-ssh-key\") pod \"87710985-771e-4a43-a5d1-4933e8fc0ecf\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.350947 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-repo-setup-combined-ca-bundle\") pod \"87710985-771e-4a43-a5d1-4933e8fc0ecf\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.351060 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfcnp\" (UniqueName: \"kubernetes.io/projected/87710985-771e-4a43-a5d1-4933e8fc0ecf-kube-api-access-mfcnp\") pod \"87710985-771e-4a43-a5d1-4933e8fc0ecf\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.351168 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-inventory\") pod \"87710985-771e-4a43-a5d1-4933e8fc0ecf\" (UID: \"87710985-771e-4a43-a5d1-4933e8fc0ecf\") " Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.367302 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "87710985-771e-4a43-a5d1-4933e8fc0ecf" (UID: "87710985-771e-4a43-a5d1-4933e8fc0ecf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.367678 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87710985-771e-4a43-a5d1-4933e8fc0ecf-kube-api-access-mfcnp" (OuterVolumeSpecName: "kube-api-access-mfcnp") pod "87710985-771e-4a43-a5d1-4933e8fc0ecf" (UID: "87710985-771e-4a43-a5d1-4933e8fc0ecf"). InnerVolumeSpecName "kube-api-access-mfcnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.381433 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87710985-771e-4a43-a5d1-4933e8fc0ecf" (UID: "87710985-771e-4a43-a5d1-4933e8fc0ecf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.410222 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-inventory" (OuterVolumeSpecName: "inventory") pod "87710985-771e-4a43-a5d1-4933e8fc0ecf" (UID: "87710985-771e-4a43-a5d1-4933e8fc0ecf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.453766 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.454026 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.454175 4719 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87710985-771e-4a43-a5d1-4933e8fc0ecf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.454262 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfcnp\" (UniqueName: \"kubernetes.io/projected/87710985-771e-4a43-a5d1-4933e8fc0ecf-kube-api-access-mfcnp\") on node \"crc\" DevicePath \"\"" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.909082 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" event={"ID":"87710985-771e-4a43-a5d1-4933e8fc0ecf","Type":"ContainerDied","Data":"517d882160920e9c3bfccd9d367ad8fd07fe990fb75d88c08a594fc5f5f66a4f"} Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.909379 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="517d882160920e9c3bfccd9d367ad8fd07fe990fb75d88c08a594fc5f5f66a4f" Nov 24 09:16:39 crc kubenswrapper[4719]: I1124 09:16:39.909466 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.002268 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl"] Nov 24 09:16:40 crc kubenswrapper[4719]: E1124 09:16:40.002650 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87710985-771e-4a43-a5d1-4933e8fc0ecf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.002667 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="87710985-771e-4a43-a5d1-4933e8fc0ecf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.002856 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="87710985-771e-4a43-a5d1-4933e8fc0ecf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.003474 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.018051 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.018555 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.018795 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.018865 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.022978 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl"] Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.065564 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.065613 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5x29\" (UniqueName: \"kubernetes.io/projected/e0638669-2686-4194-b1e6-794b7eabacf6-kube-api-access-x5x29\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.065672 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.065702 4719 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.166933 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.167000 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5x29\" (UniqueName: \"kubernetes.io/projected/e0638669-2686-4194-b1e6-794b7eabacf6-kube-api-access-x5x29\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.167094 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.167133 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.172419 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.172431 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.173260 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.186924 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x5x29\" (UniqueName: \"kubernetes.io/projected/e0638669-2686-4194-b1e6-794b7eabacf6-kube-api-access-x5x29\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.327553 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.831179 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl"] Nov 24 09:16:40 crc kubenswrapper[4719]: I1124 09:16:40.919836 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" event={"ID":"e0638669-2686-4194-b1e6-794b7eabacf6","Type":"ContainerStarted","Data":"811ce07af5fd06cb8a368896d35b720c41d396ccf65435a07cd4e9700187d06b"} Nov 24 09:16:41 crc kubenswrapper[4719]: I1124 09:16:41.931223 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" event={"ID":"e0638669-2686-4194-b1e6-794b7eabacf6","Type":"ContainerStarted","Data":"c24fce013632f171e8bd789580522590bd564a809fd1bd6831b7865613ed2227"} Nov 24 09:16:41 crc kubenswrapper[4719]: I1124 09:16:41.952194 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" podStartSLOduration=2.576992334 podStartE2EDuration="2.95217318s" podCreationTimestamp="2025-11-24 09:16:39 +0000 UTC" firstStartedPulling="2025-11-24 09:16:40.867092536 +0000 UTC m=+1377.198365788" lastFinishedPulling="2025-11-24 09:16:41.242273382 +0000 UTC m=+1377.573546634" observedRunningTime="2025-11-24 09:16:41.947505916 +0000 UTC m=+1378.278779178" watchObservedRunningTime="2025-11-24 09:16:41.95217318 +0000 UTC m=+1378.283446432" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.341236 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h69nz"] Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.343817 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.370117 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h69nz"] Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.434906 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-catalog-content\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.435014 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-utilities\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.435120 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfpkv\" (UniqueName: \"kubernetes.io/projected/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-kube-api-access-tfpkv\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.536937 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfpkv\" (UniqueName: \"kubernetes.io/projected/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-kube-api-access-tfpkv\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.537118 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-catalog-content\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.537295 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-utilities\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.537928 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-catalog-content\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.537948 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-utilities\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.560904 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tfpkv\" (UniqueName: \"kubernetes.io/projected/2d0dbb1b-45d0-4aa1-b76e-723a630b9105-kube-api-access-tfpkv\") pod \"redhat-operators-h69nz\" (UID: \"2d0dbb1b-45d0-4aa1-b76e-723a630b9105\") " pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: I1124 09:16:43.671936 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:16:43 crc kubenswrapper[4719]: E1124 09:16:43.977835 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice/crio-cf5838a65504bea7c2c0f62a30752532cf09a5cfa9e0a03e94c2ddfb517c14fc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957bbc3c_6b1d_403a_a49d_6bafef454a48.slice\": RecentStats: unable to find data in memory cache]" Nov 24 09:16:44 crc kubenswrapper[4719]: W1124 09:16:44.166813 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d0dbb1b_45d0_4aa1_b76e_723a630b9105.slice/crio-48f24a78b26c464144c397a8e1b991c30dcabeeee8f5524666ad34bc83902274 WatchSource:0}: Error finding container 48f24a78b26c464144c397a8e1b991c30dcabeeee8f5524666ad34bc83902274: Status 404 returned error can't find the container with id 48f24a78b26c464144c397a8e1b991c30dcabeeee8f5524666ad34bc83902274 Nov 24 09:16:44 crc kubenswrapper[4719]: I1124 09:16:44.168511 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h69nz"] Nov 24 09:16:44 crc kubenswrapper[4719]: I1124 09:16:44.962992 4719 generic.go:334] "Generic (PLEG): container finished" podID="2d0dbb1b-45d0-4aa1-b76e-723a630b9105" containerID="67d0aa0537fdfc470372db14f4d52f4c32fc4a7ca42ff9535a4453872641159c" exitCode=0 Nov 24 09:16:44 crc kubenswrapper[4719]: I1124 09:16:44.963634 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h69nz" event={"ID":"2d0dbb1b-45d0-4aa1-b76e-723a630b9105","Type":"ContainerDied","Data":"67d0aa0537fdfc470372db14f4d52f4c32fc4a7ca42ff9535a4453872641159c"} Nov 24 09:16:44 crc kubenswrapper[4719]: I1124 09:16:44.963663 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h69nz" event={"ID":"2d0dbb1b-45d0-4aa1-b76e-723a630b9105","Type":"ContainerStarted","Data":"48f24a78b26c464144c397a8e1b991c30dcabeeee8f5524666ad34bc83902274"} Nov 24 09:16:45 crc kubenswrapper[4719]: I1124 09:16:45.026232 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 09:16:55 crc kubenswrapper[4719]: I1124 09:16:55.073986 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h69nz" event={"ID":"2d0dbb1b-45d0-4aa1-b76e-723a630b9105","Type":"ContainerStarted","Data":"3577222d7ac9b654e5601e67f9fa87f4d6c5e3d60cb9825d3bba3678167f7048"} Nov 24 09:16:57 crc kubenswrapper[4719]: I1124 09:16:57.093671 4719 generic.go:334] "Generic (PLEG): container finished" podID="2d0dbb1b-45d0-4aa1-b76e-723a630b9105" containerID="3577222d7ac9b654e5601e67f9fa87f4d6c5e3d60cb9825d3bba3678167f7048" exitCode=0 Nov 24 09:16:57 crc kubenswrapper[4719]: I1124 09:16:57.093760 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h69nz" 
event={"ID":"2d0dbb1b-45d0-4aa1-b76e-723a630b9105","Type":"ContainerDied","Data":"3577222d7ac9b654e5601e67f9fa87f4d6c5e3d60cb9825d3bba3678167f7048"} Nov 24 09:16:59 crc kubenswrapper[4719]: I1124 09:16:59.125004 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h69nz" event={"ID":"2d0dbb1b-45d0-4aa1-b76e-723a630b9105","Type":"ContainerStarted","Data":"d9612270aab66f673f8f82b3c1f99ce2bbf19d00fe89573f0bf3c0a1b01e8bdb"} Nov 24 09:16:59 crc kubenswrapper[4719]: I1124 09:16:59.146767 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h69nz" podStartSLOduration=2.61779759 podStartE2EDuration="16.146744s" podCreationTimestamp="2025-11-24 09:16:43 +0000 UTC" firstStartedPulling="2025-11-24 09:16:44.964577623 +0000 UTC m=+1381.295850875" lastFinishedPulling="2025-11-24 09:16:58.493524033 +0000 UTC m=+1394.824797285" observedRunningTime="2025-11-24 09:16:59.144394862 +0000 UTC m=+1395.475668144" watchObservedRunningTime="2025-11-24 09:16:59.146744 +0000 UTC m=+1395.478017252" Nov 24 09:17:03 crc kubenswrapper[4719]: I1124 09:17:03.672194 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:17:03 crc kubenswrapper[4719]: I1124 09:17:03.672588 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:17:04 crc kubenswrapper[4719]: I1124 09:17:04.561934 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:17:04 crc kubenswrapper[4719]: I1124 09:17:04.562257 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:17:04 crc kubenswrapper[4719]: I1124 09:17:04.562304 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:17:04 crc kubenswrapper[4719]: I1124 09:17:04.563015 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b04d639d9aa1ad87769535c446009de2717540d226e0b11055a32fbdd9893eb6"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:17:04 crc kubenswrapper[4719]: I1124 09:17:04.563096 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://b04d639d9aa1ad87769535c446009de2717540d226e0b11055a32fbdd9893eb6" gracePeriod=600 Nov 24 09:17:04 crc kubenswrapper[4719]: I1124 09:17:04.726200 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h69nz" podUID="2d0dbb1b-45d0-4aa1-b76e-723a630b9105" containerName="registry-server" probeResult="failure" output=< Nov 24 09:17:04 crc 
kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:17:04 crc kubenswrapper[4719]: > Nov 24 09:17:05 crc kubenswrapper[4719]: I1124 09:17:05.176247 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="b04d639d9aa1ad87769535c446009de2717540d226e0b11055a32fbdd9893eb6" exitCode=0 Nov 24 09:17:05 crc kubenswrapper[4719]: I1124 09:17:05.176446 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"b04d639d9aa1ad87769535c446009de2717540d226e0b11055a32fbdd9893eb6"} Nov 24 09:17:05 crc kubenswrapper[4719]: I1124 09:17:05.176565 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"} Nov 24 09:17:05 crc kubenswrapper[4719]: I1124 09:17:05.176588 4719 scope.go:117] "RemoveContainer" containerID="abd7ce8489d65ccef4f15a6a456d72d66be28ce94d53032a08cda3487cfa7499" Nov 24 09:17:08 crc kubenswrapper[4719]: I1124 09:17:08.084181 4719 scope.go:117] "RemoveContainer" containerID="8868003a1fe41de35e9e1da9657efd5cab96f315287562ff679bd74ca1e575b0" Nov 24 09:17:08 crc kubenswrapper[4719]: I1124 09:17:08.110627 4719 scope.go:117] "RemoveContainer" containerID="898dcf6011ae7ed2019f157bb57e4f2cdd36e59aa4b69e0daa5c20223a54c457" Nov 24 09:17:14 crc kubenswrapper[4719]: I1124 09:17:14.713859 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h69nz" podUID="2d0dbb1b-45d0-4aa1-b76e-723a630b9105" containerName="registry-server" probeResult="failure" output=< Nov 24 09:17:14 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:17:14 crc kubenswrapper[4719]: > Nov 24 09:17:23 crc kubenswrapper[4719]: I1124 09:17:23.722041 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:17:23 crc kubenswrapper[4719]: I1124 09:17:23.775135 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h69nz" Nov 24 09:17:23 crc kubenswrapper[4719]: I1124 09:17:23.843920 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h69nz"] Nov 24 09:17:23 crc kubenswrapper[4719]: I1124 09:17:23.960538 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zgtch"] Nov 24 09:17:23 crc kubenswrapper[4719]: I1124 09:17:23.960790 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zgtch" podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerName="registry-server" containerID="cri-o://0f2061bd736e4d0d2a510c21309d8b2e532b966789dc97094a33a9dd294cf0cd" gracePeriod=2 Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.355290 4719 generic.go:334] "Generic (PLEG): container finished" podID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerID="0f2061bd736e4d0d2a510c21309d8b2e532b966789dc97094a33a9dd294cf0cd" exitCode=0 Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.355448 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zgtch" 
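Two probe failures interleave above: the registry-server startup probe can't reach :50051 until the catalog has loaded (the "timeout: failed to connect service" output looks like an exec'd gRPC health check), while the machine-config-daemon's HTTP liveness probe on 127.0.0.1:8798 gets connection-refused and the container is killed with gracePeriod=600 and restarted. A loose Go approximation of what those probes observe at the TCP level; the addresses are taken from the log, and this is a sketch, not the kubelet prober:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

// Dial the two probe targets from the log with a 1s budget and classify the
// failure the way the probe output does: refused means nothing is listening
// (daemon restarting), a timeout means the server exists but isn't answering.
func main() {
	for _, addr := range []string{"127.0.0.1:50051", "127.0.0.1:8798"} {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println(addr, "reachable")
			continue
		}
		var ne net.Error
		switch {
		case errors.Is(err, syscall.ECONNREFUSED):
			fmt.Println(addr, "connection refused (nothing listening yet)")
		case errors.As(err, &ne) && ne.Timeout():
			fmt.Println(addr, "timed out within 1s")
		default:
			fmt.Println(addr, err)
		}
	}
}
```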
event={"ID":"cbda51de-65a7-4a82-b61a-05ad0766c72d","Type":"ContainerDied","Data":"0f2061bd736e4d0d2a510c21309d8b2e532b966789dc97094a33a9dd294cf0cd"} Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.438147 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zgtch" Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.539946 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-utilities\") pod \"cbda51de-65a7-4a82-b61a-05ad0766c72d\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.540032 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d6lc\" (UniqueName: \"kubernetes.io/projected/cbda51de-65a7-4a82-b61a-05ad0766c72d-kube-api-access-7d6lc\") pod \"cbda51de-65a7-4a82-b61a-05ad0766c72d\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.540131 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-catalog-content\") pod \"cbda51de-65a7-4a82-b61a-05ad0766c72d\" (UID: \"cbda51de-65a7-4a82-b61a-05ad0766c72d\") " Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.540896 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-utilities" (OuterVolumeSpecName: "utilities") pod "cbda51de-65a7-4a82-b61a-05ad0766c72d" (UID: "cbda51de-65a7-4a82-b61a-05ad0766c72d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.542946 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.558343 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbda51de-65a7-4a82-b61a-05ad0766c72d-kube-api-access-7d6lc" (OuterVolumeSpecName: "kube-api-access-7d6lc") pod "cbda51de-65a7-4a82-b61a-05ad0766c72d" (UID: "cbda51de-65a7-4a82-b61a-05ad0766c72d"). InnerVolumeSpecName "kube-api-access-7d6lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.607359 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbda51de-65a7-4a82-b61a-05ad0766c72d" (UID: "cbda51de-65a7-4a82-b61a-05ad0766c72d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.644755 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d6lc\" (UniqueName: \"kubernetes.io/projected/cbda51de-65a7-4a82-b61a-05ad0766c72d-kube-api-access-7d6lc\") on node \"crc\" DevicePath \"\"" Nov 24 09:17:24 crc kubenswrapper[4719]: I1124 09:17:24.644802 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbda51de-65a7-4a82-b61a-05ad0766c72d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:17:25 crc kubenswrapper[4719]: I1124 09:17:25.369949 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zgtch" event={"ID":"cbda51de-65a7-4a82-b61a-05ad0766c72d","Type":"ContainerDied","Data":"f6f50ea7a5d12ad566d87bb20cb81793afee8f4c16d804cf9adb2586a8ba45b2"} Nov 24 09:17:25 crc kubenswrapper[4719]: I1124 09:17:25.371513 4719 scope.go:117] "RemoveContainer" containerID="0f2061bd736e4d0d2a510c21309d8b2e532b966789dc97094a33a9dd294cf0cd" Nov 24 09:17:25 crc kubenswrapper[4719]: I1124 09:17:25.370757 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zgtch" Nov 24 09:17:25 crc kubenswrapper[4719]: I1124 09:17:25.394497 4719 scope.go:117] "RemoveContainer" containerID="0e49187facffc9f97938d78f3a1e5cd1b6bb3757f45aae3e4381cc449710a401" Nov 24 09:17:25 crc kubenswrapper[4719]: I1124 09:17:25.420877 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zgtch"] Nov 24 09:17:25 crc kubenswrapper[4719]: I1124 09:17:25.447555 4719 scope.go:117] "RemoveContainer" containerID="d1ffada7af2b79e77afb76a81abe92c558a7a6fd6c6165d747245763d5893435" Nov 24 09:17:25 crc kubenswrapper[4719]: I1124 09:17:25.452286 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zgtch"] Nov 24 09:17:26 crc kubenswrapper[4719]: I1124 09:17:26.534246 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" path="/var/lib/kubelet/pods/cbda51de-65a7-4a82-b61a-05ad0766c72d/volumes" Nov 24 09:18:08 crc kubenswrapper[4719]: I1124 09:18:08.243170 4719 scope.go:117] "RemoveContainer" containerID="287380c3ec074c5c596ec45de04102841a68316200401bae503db9b7e831f9d9" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.859519 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vxqgr"] Nov 24 09:18:39 crc kubenswrapper[4719]: E1124 09:18:39.863863 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerName="registry-server" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.863879 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerName="registry-server" Nov 24 09:18:39 crc kubenswrapper[4719]: E1124 09:18:39.863896 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerName="extract-utilities" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.863902 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerName="extract-utilities" Nov 24 09:18:39 crc kubenswrapper[4719]: E1124 09:18:39.863923 4719 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerName="extract-content" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.863929 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerName="extract-content" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.864257 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbda51de-65a7-4a82-b61a-05ad0766c72d" containerName="registry-server" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.865497 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.899963 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxqgr"] Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.970894 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz557\" (UniqueName: \"kubernetes.io/projected/c0cf0c3a-98c7-4448-a705-50bac6f265c5-kube-api-access-kz557\") pod \"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.971350 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-catalog-content\") pod \"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:39 crc kubenswrapper[4719]: I1124 09:18:39.971557 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-utilities\") pod \"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.073653 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-catalog-content\") pod \"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.073753 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-utilities\") pod \"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.073792 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz557\" (UniqueName: \"kubernetes.io/projected/c0cf0c3a-98c7-4448-a705-50bac6f265c5-kube-api-access-kz557\") pod \"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.074230 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-catalog-content\") pod 
\"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.074285 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-utilities\") pod \"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.094578 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz557\" (UniqueName: \"kubernetes.io/projected/c0cf0c3a-98c7-4448-a705-50bac6f265c5-kube-api-access-kz557\") pod \"community-operators-vxqgr\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.199246 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.739570 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vxqgr"] Nov 24 09:18:40 crc kubenswrapper[4719]: I1124 09:18:40.804228 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxqgr" event={"ID":"c0cf0c3a-98c7-4448-a705-50bac6f265c5","Type":"ContainerStarted","Data":"2e445254c50a8a42bc93f10ad4bedc1ec1567c532365f7c81dc7167441f06074"} Nov 24 09:18:41 crc kubenswrapper[4719]: I1124 09:18:41.821279 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerID="91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c" exitCode=0 Nov 24 09:18:41 crc kubenswrapper[4719]: I1124 09:18:41.822001 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxqgr" event={"ID":"c0cf0c3a-98c7-4448-a705-50bac6f265c5","Type":"ContainerDied","Data":"91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c"} Nov 24 09:18:42 crc kubenswrapper[4719]: I1124 09:18:42.834854 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxqgr" event={"ID":"c0cf0c3a-98c7-4448-a705-50bac6f265c5","Type":"ContainerStarted","Data":"707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40"} Nov 24 09:18:44 crc kubenswrapper[4719]: I1124 09:18:44.856880 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerID="707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40" exitCode=0 Nov 24 09:18:44 crc kubenswrapper[4719]: I1124 09:18:44.856956 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxqgr" event={"ID":"c0cf0c3a-98c7-4448-a705-50bac6f265c5","Type":"ContainerDied","Data":"707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40"} Nov 24 09:18:45 crc kubenswrapper[4719]: I1124 09:18:45.872736 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxqgr" event={"ID":"c0cf0c3a-98c7-4448-a705-50bac6f265c5","Type":"ContainerStarted","Data":"93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583"} Nov 24 09:18:45 crc kubenswrapper[4719]: I1124 09:18:45.897867 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-vxqgr" podStartSLOduration=3.427988525 podStartE2EDuration="6.897845898s" podCreationTimestamp="2025-11-24 09:18:39 +0000 UTC" firstStartedPulling="2025-11-24 09:18:41.824572835 +0000 UTC m=+1498.155846087" lastFinishedPulling="2025-11-24 09:18:45.294430208 +0000 UTC m=+1501.625703460" observedRunningTime="2025-11-24 09:18:45.891531565 +0000 UTC m=+1502.222804827" watchObservedRunningTime="2025-11-24 09:18:45.897845898 +0000 UTC m=+1502.229119140" Nov 24 09:18:50 crc kubenswrapper[4719]: I1124 09:18:50.199651 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:50 crc kubenswrapper[4719]: I1124 09:18:50.200071 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:50 crc kubenswrapper[4719]: I1124 09:18:50.248994 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:50 crc kubenswrapper[4719]: I1124 09:18:50.965761 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:51 crc kubenswrapper[4719]: I1124 09:18:51.012301 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxqgr"] Nov 24 09:18:52 crc kubenswrapper[4719]: I1124 09:18:52.942207 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vxqgr" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerName="registry-server" containerID="cri-o://93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583" gracePeriod=2 Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.416894 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mqzqc"] Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.427387 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.451671 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mqzqc"] Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.459796 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.510629 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-utilities\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.510782 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-catalog-content\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.510842 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cxvk\" (UniqueName: \"kubernetes.io/projected/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-kube-api-access-9cxvk\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.611587 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-utilities\") pod \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.611663 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-catalog-content\") pod \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.611771 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz557\" (UniqueName: \"kubernetes.io/projected/c0cf0c3a-98c7-4448-a705-50bac6f265c5-kube-api-access-kz557\") pod \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\" (UID: \"c0cf0c3a-98c7-4448-a705-50bac6f265c5\") " Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.612215 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-catalog-content\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.612289 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cxvk\" (UniqueName: \"kubernetes.io/projected/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-kube-api-access-9cxvk\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.612346 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-utilities\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " 
pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.612936 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-utilities\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.613702 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-utilities" (OuterVolumeSpecName: "utilities") pod "c0cf0c3a-98c7-4448-a705-50bac6f265c5" (UID: "c0cf0c3a-98c7-4448-a705-50bac6f265c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.613869 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-catalog-content\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.629996 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0cf0c3a-98c7-4448-a705-50bac6f265c5-kube-api-access-kz557" (OuterVolumeSpecName: "kube-api-access-kz557") pod "c0cf0c3a-98c7-4448-a705-50bac6f265c5" (UID: "c0cf0c3a-98c7-4448-a705-50bac6f265c5"). InnerVolumeSpecName "kube-api-access-kz557". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.635009 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cxvk\" (UniqueName: \"kubernetes.io/projected/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-kube-api-access-9cxvk\") pod \"certified-operators-mqzqc\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") " pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.663867 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0cf0c3a-98c7-4448-a705-50bac6f265c5" (UID: "c0cf0c3a-98c7-4448-a705-50bac6f265c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.714193 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.714233 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0cf0c3a-98c7-4448-a705-50bac6f265c5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.714247 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz557\" (UniqueName: \"kubernetes.io/projected/c0cf0c3a-98c7-4448-a705-50bac6f265c5-kube-api-access-kz557\") on node \"crc\" DevicePath \"\"" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.777408 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mqzqc" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.964106 4719 generic.go:334] "Generic (PLEG): container finished" podID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerID="93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583" exitCode=0 Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.964147 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxqgr" event={"ID":"c0cf0c3a-98c7-4448-a705-50bac6f265c5","Type":"ContainerDied","Data":"93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583"} Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.964173 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vxqgr" event={"ID":"c0cf0c3a-98c7-4448-a705-50bac6f265c5","Type":"ContainerDied","Data":"2e445254c50a8a42bc93f10ad4bedc1ec1567c532365f7c81dc7167441f06074"} Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.964191 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vxqgr" Nov 24 09:18:53 crc kubenswrapper[4719]: I1124 09:18:53.964216 4719 scope.go:117] "RemoveContainer" containerID="93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.035933 4719 scope.go:117] "RemoveContainer" containerID="707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.055165 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vxqgr"] Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.069898 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vxqgr"] Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.155203 4719 scope.go:117] "RemoveContainer" containerID="91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.201348 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mqzqc"] Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.229765 4719 scope.go:117] "RemoveContainer" containerID="93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583" Nov 24 09:18:54 crc kubenswrapper[4719]: E1124 09:18:54.230300 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583\": container with ID starting with 93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583 not found: ID does not exist" containerID="93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.230330 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583"} err="failed to get container status \"93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583\": rpc error: code = NotFound desc = could not find container \"93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583\": container with ID starting with 93daf9a09f768d9bf4ce892299fbe7c75d0ece59da02e3112bf13fbe7a704583 not found: ID does not exist" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.230352 4719 scope.go:117] 
"RemoveContainer" containerID="707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40" Nov 24 09:18:54 crc kubenswrapper[4719]: E1124 09:18:54.230688 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40\": container with ID starting with 707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40 not found: ID does not exist" containerID="707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.230709 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40"} err="failed to get container status \"707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40\": rpc error: code = NotFound desc = could not find container \"707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40\": container with ID starting with 707dfd23ebbc684131bc37d2cc8e79e1df6e4b2aa27cc829aa99f638ab5cca40 not found: ID does not exist" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.230722 4719 scope.go:117] "RemoveContainer" containerID="91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c" Nov 24 09:18:54 crc kubenswrapper[4719]: E1124 09:18:54.231018 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c\": container with ID starting with 91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c not found: ID does not exist" containerID="91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.231207 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c"} err="failed to get container status \"91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c\": rpc error: code = NotFound desc = could not find container \"91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c\": container with ID starting with 91d60053e1bb259c14395963ddf6a9a03b399bbe8659275b832567c85ab9b27c not found: ID does not exist" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.531683 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" path="/var/lib/kubelet/pods/c0cf0c3a-98c7-4448-a705-50bac6f265c5/volumes" Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.979505 4719 generic.go:334] "Generic (PLEG): container finished" podID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerID="eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7" exitCode=0 Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.979613 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqzqc" event={"ID":"5317b0f9-ae2a-4514-b11a-f54d75bf09aa","Type":"ContainerDied","Data":"eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7"} Nov 24 09:18:54 crc kubenswrapper[4719]: I1124 09:18:54.979836 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqzqc" event={"ID":"5317b0f9-ae2a-4514-b11a-f54d75bf09aa","Type":"ContainerStarted","Data":"cd5e78e77a89f0c405381b6cd9724ea53662fd34cc41af109d4e014e7db1bb18"} Nov 24 
Nov 24 09:18:55 crc kubenswrapper[4719]: I1124 09:18:55.995502 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqzqc" event={"ID":"5317b0f9-ae2a-4514-b11a-f54d75bf09aa","Type":"ContainerStarted","Data":"5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472"}
Nov 24 09:18:58 crc kubenswrapper[4719]: I1124 09:18:58.103895 4719 generic.go:334] "Generic (PLEG): container finished" podID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerID="5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472" exitCode=0
Nov 24 09:18:58 crc kubenswrapper[4719]: I1124 09:18:58.103973 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqzqc" event={"ID":"5317b0f9-ae2a-4514-b11a-f54d75bf09aa","Type":"ContainerDied","Data":"5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472"}
Nov 24 09:18:59 crc kubenswrapper[4719]: I1124 09:18:59.115984 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqzqc" event={"ID":"5317b0f9-ae2a-4514-b11a-f54d75bf09aa","Type":"ContainerStarted","Data":"ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685"}
Nov 24 09:18:59 crc kubenswrapper[4719]: I1124 09:18:59.137776 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mqzqc" podStartSLOduration=2.595867277 podStartE2EDuration="6.13775433s" podCreationTimestamp="2025-11-24 09:18:53 +0000 UTC" firstStartedPulling="2025-11-24 09:18:54.98220598 +0000 UTC m=+1511.313479242" lastFinishedPulling="2025-11-24 09:18:58.524093033 +0000 UTC m=+1514.855366295" observedRunningTime="2025-11-24 09:18:59.134604569 +0000 UTC m=+1515.465877831" watchObservedRunningTime="2025-11-24 09:18:59.13775433 +0000 UTC m=+1515.469027582"
Nov 24 09:19:03 crc kubenswrapper[4719]: I1124 09:19:03.777646 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mqzqc"
Nov 24 09:19:03 crc kubenswrapper[4719]: I1124 09:19:03.777933 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mqzqc"
Nov 24 09:19:03 crc kubenswrapper[4719]: I1124 09:19:03.830428 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mqzqc"
Nov 24 09:19:04 crc kubenswrapper[4719]: I1124 09:19:04.204754 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mqzqc"
Nov 24 09:19:04 crc kubenswrapper[4719]: I1124 09:19:04.251498 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mqzqc"]
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.171754 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mqzqc" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerName="registry-server" containerID="cri-o://ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685" gracePeriod=2
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.627477 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mqzqc"
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.653708 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-catalog-content\") pod \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") "
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.653757 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-utilities\") pod \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") "
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.653808 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cxvk\" (UniqueName: \"kubernetes.io/projected/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-kube-api-access-9cxvk\") pod \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\" (UID: \"5317b0f9-ae2a-4514-b11a-f54d75bf09aa\") "
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.655480 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-utilities" (OuterVolumeSpecName: "utilities") pod "5317b0f9-ae2a-4514-b11a-f54d75bf09aa" (UID: "5317b0f9-ae2a-4514-b11a-f54d75bf09aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.667420 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-kube-api-access-9cxvk" (OuterVolumeSpecName: "kube-api-access-9cxvk") pod "5317b0f9-ae2a-4514-b11a-f54d75bf09aa" (UID: "5317b0f9-ae2a-4514-b11a-f54d75bf09aa"). InnerVolumeSpecName "kube-api-access-9cxvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.716209 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5317b0f9-ae2a-4514-b11a-f54d75bf09aa" (UID: "5317b0f9-ae2a-4514-b11a-f54d75bf09aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.758216 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.758262 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 09:19:06 crc kubenswrapper[4719]: I1124 09:19:06.758276 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cxvk\" (UniqueName: \"kubernetes.io/projected/5317b0f9-ae2a-4514-b11a-f54d75bf09aa-kube-api-access-9cxvk\") on node \"crc\" DevicePath \"\""
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.182734 4719 generic.go:334] "Generic (PLEG): container finished" podID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerID="ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685" exitCode=0
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.182798 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqzqc" event={"ID":"5317b0f9-ae2a-4514-b11a-f54d75bf09aa","Type":"ContainerDied","Data":"ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685"}
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.182819 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mqzqc"
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.182843 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqzqc" event={"ID":"5317b0f9-ae2a-4514-b11a-f54d75bf09aa","Type":"ContainerDied","Data":"cd5e78e77a89f0c405381b6cd9724ea53662fd34cc41af109d4e014e7db1bb18"}
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.182868 4719 scope.go:117] "RemoveContainer" containerID="ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685"
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.215552 4719 scope.go:117] "RemoveContainer" containerID="5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472"
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.222295 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mqzqc"]
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.233630 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mqzqc"]
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.237465 4719 scope.go:117] "RemoveContainer" containerID="eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7"
Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.279827 4719 scope.go:117] "RemoveContainer" containerID="ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685"
Nov 24 09:19:07 crc kubenswrapper[4719]: E1124 09:19:07.280290 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685\": container with ID starting with ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685 not found: ID does not exist" containerID="ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685"
4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685"} err="failed to get container status \"ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685\": rpc error: code = NotFound desc = could not find container \"ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685\": container with ID starting with ea2e0aedb6e460d6913c049cc34b928b2842e09f0e0da5d28964fe327eedd685 not found: ID does not exist" Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.280369 4719 scope.go:117] "RemoveContainer" containerID="5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472" Nov 24 09:19:07 crc kubenswrapper[4719]: E1124 09:19:07.280903 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472\": container with ID starting with 5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472 not found: ID does not exist" containerID="5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472" Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.280940 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472"} err="failed to get container status \"5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472\": rpc error: code = NotFound desc = could not find container \"5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472\": container with ID starting with 5d8ae08061d4be02bf9535252eac6f13cd7efe6e37d66bfdaed8c2d3e5361472 not found: ID does not exist" Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.280968 4719 scope.go:117] "RemoveContainer" containerID="eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7" Nov 24 09:19:07 crc kubenswrapper[4719]: E1124 09:19:07.281275 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7\": container with ID starting with eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7 not found: ID does not exist" containerID="eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7" Nov 24 09:19:07 crc kubenswrapper[4719]: I1124 09:19:07.281312 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7"} err="failed to get container status \"eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7\": rpc error: code = NotFound desc = could not find container \"eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7\": container with ID starting with eac400c15fe2ccb24f65c9ce15555c0b59a5d0f482c5f8761c08689befa8b5e7 not found: ID does not exist" Nov 24 09:19:08 crc kubenswrapper[4719]: I1124 09:19:08.359409 4719 scope.go:117] "RemoveContainer" containerID="7fc42c7c86a587a0ed0efe8aab8087fd330fd8c28158791cd55e6f654d7ef46b" Nov 24 09:19:08 crc kubenswrapper[4719]: I1124 09:19:08.379892 4719 scope.go:117] "RemoveContainer" containerID="03545927ad1fd28a09891fa879432a2a0b76f00ebe7a275c8964293f20783476" Nov 24 09:19:08 crc kubenswrapper[4719]: I1124 09:19:08.403450 4719 scope.go:117] "RemoveContainer" containerID="d41c6cb18633057f7b541d63406248fc193811764cf93e42767185c63805fb47" Nov 24 
09:19:08 crc kubenswrapper[4719]: I1124 09:19:08.424022 4719 scope.go:117] "RemoveContainer" containerID="8fe57bd90a844df1d7e8fda78ba86b2d321c47102563cf30c66e09bad452eda1" Nov 24 09:19:08 crc kubenswrapper[4719]: I1124 09:19:08.531589 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" path="/var/lib/kubelet/pods/5317b0f9-ae2a-4514-b11a-f54d75bf09aa/volumes" Nov 24 09:19:34 crc kubenswrapper[4719]: I1124 09:19:34.562335 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:19:34 crc kubenswrapper[4719]: I1124 09:19:34.562888 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:20:02 crc kubenswrapper[4719]: I1124 09:20:02.674658 4719 generic.go:334] "Generic (PLEG): container finished" podID="e0638669-2686-4194-b1e6-794b7eabacf6" containerID="c24fce013632f171e8bd789580522590bd564a809fd1bd6831b7865613ed2227" exitCode=0 Nov 24 09:20:02 crc kubenswrapper[4719]: I1124 09:20:02.674753 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" event={"ID":"e0638669-2686-4194-b1e6-794b7eabacf6","Type":"ContainerDied","Data":"c24fce013632f171e8bd789580522590bd564a809fd1bd6831b7865613ed2227"} Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.124420 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.204519 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5x29\" (UniqueName: \"kubernetes.io/projected/e0638669-2686-4194-b1e6-794b7eabacf6-kube-api-access-x5x29\") pod \"e0638669-2686-4194-b1e6-794b7eabacf6\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.204605 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-bootstrap-combined-ca-bundle\") pod \"e0638669-2686-4194-b1e6-794b7eabacf6\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.204826 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-inventory\") pod \"e0638669-2686-4194-b1e6-794b7eabacf6\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.204943 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-ssh-key\") pod \"e0638669-2686-4194-b1e6-794b7eabacf6\" (UID: \"e0638669-2686-4194-b1e6-794b7eabacf6\") " Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.210068 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e0638669-2686-4194-b1e6-794b7eabacf6" (UID: "e0638669-2686-4194-b1e6-794b7eabacf6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.210291 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0638669-2686-4194-b1e6-794b7eabacf6-kube-api-access-x5x29" (OuterVolumeSpecName: "kube-api-access-x5x29") pod "e0638669-2686-4194-b1e6-794b7eabacf6" (UID: "e0638669-2686-4194-b1e6-794b7eabacf6"). InnerVolumeSpecName "kube-api-access-x5x29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.231212 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0638669-2686-4194-b1e6-794b7eabacf6" (UID: "e0638669-2686-4194-b1e6-794b7eabacf6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.233633 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-inventory" (OuterVolumeSpecName: "inventory") pod "e0638669-2686-4194-b1e6-794b7eabacf6" (UID: "e0638669-2686-4194-b1e6-794b7eabacf6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.308068 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5x29\" (UniqueName: \"kubernetes.io/projected/e0638669-2686-4194-b1e6-794b7eabacf6-kube-api-access-x5x29\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.308114 4719 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.308135 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.308148 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0638669-2686-4194-b1e6-794b7eabacf6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.562465 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.562541 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.699856 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.699826 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl" event={"ID":"e0638669-2686-4194-b1e6-794b7eabacf6","Type":"ContainerDied","Data":"811ce07af5fd06cb8a368896d35b720c41d396ccf65435a07cd4e9700187d06b"} Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.699960 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="811ce07af5fd06cb8a368896d35b720c41d396ccf65435a07cd4e9700187d06b" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.781988 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7"] Nov 24 09:20:04 crc kubenswrapper[4719]: E1124 09:20:04.782464 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerName="extract-content" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782488 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerName="extract-content" Nov 24 09:20:04 crc kubenswrapper[4719]: E1124 09:20:04.782510 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerName="registry-server" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782518 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerName="registry-server" Nov 24 09:20:04 crc kubenswrapper[4719]: E1124 09:20:04.782532 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerName="extract-utilities" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782540 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerName="extract-utilities" Nov 24 09:20:04 crc kubenswrapper[4719]: E1124 09:20:04.782554 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerName="extract-content" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782563 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerName="extract-content" Nov 24 09:20:04 crc kubenswrapper[4719]: E1124 09:20:04.782577 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerName="registry-server" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782584 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerName="registry-server" Nov 24 09:20:04 crc kubenswrapper[4719]: E1124 09:20:04.782606 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0638669-2686-4194-b1e6-794b7eabacf6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782616 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0638669-2686-4194-b1e6-794b7eabacf6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:20:04 crc kubenswrapper[4719]: E1124 09:20:04.782631 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerName="extract-utilities" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782640 
4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerName="extract-utilities" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782899 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0cf0c3a-98c7-4448-a705-50bac6f265c5" containerName="registry-server" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782923 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0638669-2686-4194-b1e6-794b7eabacf6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.782946 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5317b0f9-ae2a-4514-b11a-f54d75bf09aa" containerName="registry-server" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.783711 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.788525 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.788782 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.788976 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.789147 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.793552 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7"] Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.919801 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.919861 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbw8\" (UniqueName: \"kubernetes.io/projected/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-kube-api-access-6rbw8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:04 crc kubenswrapper[4719]: I1124 09:20:04.919911 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:05 crc kubenswrapper[4719]: I1124 09:20:05.021550 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:05 crc kubenswrapper[4719]: I1124 09:20:05.021609 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbw8\" (UniqueName: \"kubernetes.io/projected/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-kube-api-access-6rbw8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:05 crc kubenswrapper[4719]: I1124 09:20:05.021661 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:05 crc kubenswrapper[4719]: I1124 09:20:05.026392 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:05 crc kubenswrapper[4719]: I1124 09:20:05.026495 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:05 crc kubenswrapper[4719]: I1124 09:20:05.045075 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbw8\" (UniqueName: \"kubernetes.io/projected/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-kube-api-access-6rbw8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:05 crc kubenswrapper[4719]: I1124 09:20:05.116008 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:20:06 crc kubenswrapper[4719]: I1124 09:20:05.611479 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:20:06 crc kubenswrapper[4719]: I1124 09:20:05.620615 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7"] Nov 24 09:20:06 crc kubenswrapper[4719]: I1124 09:20:05.715456 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" event={"ID":"67c842f7-bace-4901-a2a7-b2e3ca12ff5e","Type":"ContainerStarted","Data":"63b82194179d3b3c51d8ad72cc9cd6bc2f954771f9ed8bf7f776fc03a84f9c96"} Nov 24 09:20:06 crc kubenswrapper[4719]: I1124 09:20:06.729426 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" event={"ID":"67c842f7-bace-4901-a2a7-b2e3ca12ff5e","Type":"ContainerStarted","Data":"c89f8519f492c64d9ebba9faa6d076032ace204c019d81e8ea3cea13dea82ef1"} Nov 24 09:20:06 crc kubenswrapper[4719]: I1124 09:20:06.750637 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" podStartSLOduration=2.293170223 podStartE2EDuration="2.750620239s" podCreationTimestamp="2025-11-24 09:20:04 +0000 UTC" firstStartedPulling="2025-11-24 09:20:05.611213415 +0000 UTC m=+1581.942486677" lastFinishedPulling="2025-11-24 09:20:06.068663431 +0000 UTC m=+1582.399936693" observedRunningTime="2025-11-24 09:20:06.749347382 +0000 UTC m=+1583.080620644" watchObservedRunningTime="2025-11-24 09:20:06.750620239 +0000 UTC m=+1583.081893491" Nov 24 09:20:08 crc kubenswrapper[4719]: I1124 09:20:08.513577 4719 scope.go:117] "RemoveContainer" containerID="12f52c576432d36b008139cbd30750731a31b8112afe813b2c99b6fb70dc080c" Nov 24 09:20:08 crc kubenswrapper[4719]: I1124 09:20:08.589619 4719 scope.go:117] "RemoveContainer" containerID="20fa245275e86ed49762075c7b334308c43a4732d262cd900534d4cd9c19a39f" Nov 24 09:20:08 crc kubenswrapper[4719]: I1124 09:20:08.615969 4719 scope.go:117] "RemoveContainer" containerID="43fea711240764429e6a9ab28d7fd1e0e45e9905e5cf1936db6b6da1e2276717" Nov 24 09:20:08 crc kubenswrapper[4719]: I1124 09:20:08.676718 4719 scope.go:117] "RemoveContainer" containerID="b0ea798408b88995b596effd4c5987ed9b8ef43ae39a446cbae0909365064051" Nov 24 09:20:28 crc kubenswrapper[4719]: I1124 09:20:28.739441 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v7shx"] Nov 24 09:20:28 crc kubenswrapper[4719]: I1124 09:20:28.742150 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:28 crc kubenswrapper[4719]: I1124 09:20:28.788089 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7shx"] Nov 24 09:20:28 crc kubenswrapper[4719]: I1124 09:20:28.919237 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-utilities\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:28 crc kubenswrapper[4719]: I1124 09:20:28.919295 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnmwv\" (UniqueName: \"kubernetes.io/projected/49908e7c-1b8c-4009-ae2b-daa7620d2a19-kube-api-access-qnmwv\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:28 crc kubenswrapper[4719]: I1124 09:20:28.919596 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-catalog-content\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.021471 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-utilities\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.022015 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnmwv\" (UniqueName: \"kubernetes.io/projected/49908e7c-1b8c-4009-ae2b-daa7620d2a19-kube-api-access-qnmwv\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.022256 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-catalog-content\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.022344 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-utilities\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.022826 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-catalog-content\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.048534 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qnmwv\" (UniqueName: \"kubernetes.io/projected/49908e7c-1b8c-4009-ae2b-daa7620d2a19-kube-api-access-qnmwv\") pod \"redhat-marketplace-v7shx\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.084808 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.559846 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7shx"] Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.959677 4719 generic.go:334] "Generic (PLEG): container finished" podID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerID="95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84" exitCode=0 Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.959846 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7shx" event={"ID":"49908e7c-1b8c-4009-ae2b-daa7620d2a19","Type":"ContainerDied","Data":"95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84"} Nov 24 09:20:29 crc kubenswrapper[4719]: I1124 09:20:29.959987 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7shx" event={"ID":"49908e7c-1b8c-4009-ae2b-daa7620d2a19","Type":"ContainerStarted","Data":"2e48a346fdf38649daab101d6e90f5adf3f28d89c7897600bf271848ab45d9b2"} Nov 24 09:20:30 crc kubenswrapper[4719]: I1124 09:20:30.975611 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7shx" event={"ID":"49908e7c-1b8c-4009-ae2b-daa7620d2a19","Type":"ContainerStarted","Data":"da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc"} Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.041305 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-892d-account-create-rtmf4"] Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.052820 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3bfe-account-create-l5xls"] Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.061376 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2hszz"] Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.074283 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-qbjmv"] Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.083753 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qbjmv"] Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.091791 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3bfe-account-create-l5xls"] Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.098652 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2hszz"] Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.105431 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-892d-account-create-rtmf4"] Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.986356 4719 generic.go:334] "Generic (PLEG): container finished" podID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerID="da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc" exitCode=0 Nov 24 09:20:31 crc kubenswrapper[4719]: I1124 09:20:31.986432 4719 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-v7shx" event={"ID":"49908e7c-1b8c-4009-ae2b-daa7620d2a19","Type":"ContainerDied","Data":"da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc"} Nov 24 09:20:32 crc kubenswrapper[4719]: I1124 09:20:32.535824 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d86e1d-9f0f-4aec-a19c-a02a05a34319" path="/var/lib/kubelet/pods/17d86e1d-9f0f-4aec-a19c-a02a05a34319/volumes" Nov 24 09:20:32 crc kubenswrapper[4719]: I1124 09:20:32.536682 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230fc0d1-ff11-476a-82be-177f83a0e81f" path="/var/lib/kubelet/pods/230fc0d1-ff11-476a-82be-177f83a0e81f/volumes" Nov 24 09:20:32 crc kubenswrapper[4719]: I1124 09:20:32.537442 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9b95eb-5130-4d13-9557-fe979505e602" path="/var/lib/kubelet/pods/6e9b95eb-5130-4d13-9557-fe979505e602/volumes" Nov 24 09:20:32 crc kubenswrapper[4719]: I1124 09:20:32.538211 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e32e613-504a-4221-a5ea-29c4768e4ef9" path="/var/lib/kubelet/pods/9e32e613-504a-4221-a5ea-29c4768e4ef9/volumes" Nov 24 09:20:34 crc kubenswrapper[4719]: I1124 09:20:34.009122 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7shx" event={"ID":"49908e7c-1b8c-4009-ae2b-daa7620d2a19","Type":"ContainerStarted","Data":"efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c"} Nov 24 09:20:34 crc kubenswrapper[4719]: I1124 09:20:34.047111 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v7shx" podStartSLOduration=2.330633843 podStartE2EDuration="6.047086315s" podCreationTimestamp="2025-11-24 09:20:28 +0000 UTC" firstStartedPulling="2025-11-24 09:20:29.962815864 +0000 UTC m=+1606.294089116" lastFinishedPulling="2025-11-24 09:20:33.679268336 +0000 UTC m=+1610.010541588" observedRunningTime="2025-11-24 09:20:34.045450398 +0000 UTC m=+1610.376723680" watchObservedRunningTime="2025-11-24 09:20:34.047086315 +0000 UTC m=+1610.378359597" Nov 24 09:20:34 crc kubenswrapper[4719]: I1124 09:20:34.561737 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:20:34 crc kubenswrapper[4719]: I1124 09:20:34.561834 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:20:34 crc kubenswrapper[4719]: I1124 09:20:34.561909 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:20:34 crc kubenswrapper[4719]: I1124 09:20:34.563113 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Nov 24 09:20:34 crc kubenswrapper[4719]: I1124 09:20:34.563436 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" gracePeriod=600 Nov 24 09:20:34 crc kubenswrapper[4719]: E1124 09:20:34.680378 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:20:35 crc kubenswrapper[4719]: I1124 09:20:35.020724 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" exitCode=0 Nov 24 09:20:35 crc kubenswrapper[4719]: I1124 09:20:35.020825 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"} Nov 24 09:20:35 crc kubenswrapper[4719]: I1124 09:20:35.021592 4719 scope.go:117] "RemoveContainer" containerID="b04d639d9aa1ad87769535c446009de2717540d226e0b11055a32fbdd9893eb6" Nov 24 09:20:35 crc kubenswrapper[4719]: I1124 09:20:35.022248 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:20:35 crc kubenswrapper[4719]: E1124 09:20:35.022732 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:20:35 crc kubenswrapper[4719]: I1124 09:20:35.044519 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jx67j"] Nov 24 09:20:35 crc kubenswrapper[4719]: I1124 09:20:35.053250 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-36b5-account-create-vwrrf"] Nov 24 09:20:35 crc kubenswrapper[4719]: I1124 09:20:35.068870 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jx67j"] Nov 24 09:20:35 crc kubenswrapper[4719]: I1124 09:20:35.078220 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-36b5-account-create-vwrrf"] Nov 24 09:20:36 crc kubenswrapper[4719]: I1124 09:20:36.531559 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362cf151-7819-46b5-9b25-2f42aa6370ac" path="/var/lib/kubelet/pods/362cf151-7819-46b5-9b25-2f42aa6370ac/volumes" Nov 24 09:20:36 crc kubenswrapper[4719]: I1124 09:20:36.532425 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f6fad86-e72c-41c1-8322-614721929c2a" path="/var/lib/kubelet/pods/3f6fad86-e72c-41c1-8322-614721929c2a/volumes" Nov 24 09:20:39 crc kubenswrapper[4719]: I1124 09:20:39.085649 
4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:39 crc kubenswrapper[4719]: I1124 09:20:39.086001 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:39 crc kubenswrapper[4719]: I1124 09:20:39.133896 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:40 crc kubenswrapper[4719]: I1124 09:20:40.112417 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:40 crc kubenswrapper[4719]: I1124 09:20:40.175337 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7shx"] Nov 24 09:20:42 crc kubenswrapper[4719]: I1124 09:20:42.077725 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v7shx" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerName="registry-server" containerID="cri-o://efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c" gracePeriod=2 Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.038077 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.091604 4719 generic.go:334] "Generic (PLEG): container finished" podID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerID="efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c" exitCode=0 Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.091664 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7shx" event={"ID":"49908e7c-1b8c-4009-ae2b-daa7620d2a19","Type":"ContainerDied","Data":"efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c"} Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.091694 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7shx" event={"ID":"49908e7c-1b8c-4009-ae2b-daa7620d2a19","Type":"ContainerDied","Data":"2e48a346fdf38649daab101d6e90f5adf3f28d89c7897600bf271848ab45d9b2"} Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.091713 4719 scope.go:117] "RemoveContainer" containerID="efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.091910 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7shx" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.110659 4719 scope.go:117] "RemoveContainer" containerID="da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.131508 4719 scope.go:117] "RemoveContainer" containerID="95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.170607 4719 scope.go:117] "RemoveContainer" containerID="efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c" Nov 24 09:20:43 crc kubenswrapper[4719]: E1124 09:20:43.171427 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c\": container with ID starting with efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c not found: ID does not exist" containerID="efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.171834 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c"} err="failed to get container status \"efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c\": rpc error: code = NotFound desc = could not find container \"efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c\": container with ID starting with efd30f5c6ef5844cd1b2e687df985aa83bd4e7d761a47f826f1358d4c57d1d6c not found: ID does not exist" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.171948 4719 scope.go:117] "RemoveContainer" containerID="da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc" Nov 24 09:20:43 crc kubenswrapper[4719]: E1124 09:20:43.172386 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc\": container with ID starting with da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc not found: ID does not exist" containerID="da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.172419 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc"} err="failed to get container status \"da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc\": rpc error: code = NotFound desc = could not find container \"da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc\": container with ID starting with da90af02bc6c565312d7f91ce6b20382f8f07805a71fefa5424cc5cee4abd7fc not found: ID does not exist" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.172440 4719 scope.go:117] "RemoveContainer" containerID="95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84" Nov 24 09:20:43 crc kubenswrapper[4719]: E1124 09:20:43.172628 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84\": container with ID starting with 95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84 not found: ID does not exist" containerID="95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84" 
Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.172653 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84"} err="failed to get container status \"95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84\": rpc error: code = NotFound desc = could not find container \"95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84\": container with ID starting with 95946a6139ae27a6f07e70e398c3d135954a3dc6e4cd2fa2e0b273347e8d7a84 not found: ID does not exist" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.180183 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnmwv\" (UniqueName: \"kubernetes.io/projected/49908e7c-1b8c-4009-ae2b-daa7620d2a19-kube-api-access-qnmwv\") pod \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.180266 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-catalog-content\") pod \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.184392 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-utilities\") pod \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\" (UID: \"49908e7c-1b8c-4009-ae2b-daa7620d2a19\") " Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.185270 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-utilities" (OuterVolumeSpecName: "utilities") pod "49908e7c-1b8c-4009-ae2b-daa7620d2a19" (UID: "49908e7c-1b8c-4009-ae2b-daa7620d2a19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.187721 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49908e7c-1b8c-4009-ae2b-daa7620d2a19-kube-api-access-qnmwv" (OuterVolumeSpecName: "kube-api-access-qnmwv") pod "49908e7c-1b8c-4009-ae2b-daa7620d2a19" (UID: "49908e7c-1b8c-4009-ae2b-daa7620d2a19"). InnerVolumeSpecName "kube-api-access-qnmwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.198960 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49908e7c-1b8c-4009-ae2b-daa7620d2a19" (UID: "49908e7c-1b8c-4009-ae2b-daa7620d2a19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.286823 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.286852 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnmwv\" (UniqueName: \"kubernetes.io/projected/49908e7c-1b8c-4009-ae2b-daa7620d2a19-kube-api-access-qnmwv\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.286862 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49908e7c-1b8c-4009-ae2b-daa7620d2a19-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.441636 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7shx"] Nov 24 09:20:43 crc kubenswrapper[4719]: I1124 09:20:43.459991 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7shx"] Nov 24 09:20:44 crc kubenswrapper[4719]: I1124 09:20:44.538930 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" path="/var/lib/kubelet/pods/49908e7c-1b8c-4009-ae2b-daa7620d2a19/volumes" Nov 24 09:20:49 crc kubenswrapper[4719]: I1124 09:20:49.520541 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:20:49 crc kubenswrapper[4719]: E1124 09:20:49.521228 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:20:59 crc kubenswrapper[4719]: I1124 09:20:59.036875 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bp7gj"] Nov 24 09:20:59 crc kubenswrapper[4719]: I1124 09:20:59.044951 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bp7gj"] Nov 24 09:21:00 crc kubenswrapper[4719]: I1124 09:21:00.532849 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614a41e1-aa75-4eff-818d-cd0686bc73b0" path="/var/lib/kubelet/pods/614a41e1-aa75-4eff-818d-cd0686bc73b0/volumes" Nov 24 09:21:03 crc kubenswrapper[4719]: I1124 09:21:03.521580 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:21:03 crc kubenswrapper[4719]: E1124 09:21:03.523203 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.046452 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0ea0-account-create-ckhf9"] Nov 24 09:21:07 crc 
kubenswrapper[4719]: I1124 09:21:07.056558 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-488f-account-create-zckr4"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.065338 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-x4kkz"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.074621 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0ea0-account-create-ckhf9"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.082756 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-x4kkz"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.090711 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-488f-account-create-zckr4"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.097183 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b51e-account-create-chwq5"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.103689 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xtx8w"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.110435 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kknhq"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.117504 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b51e-account-create-chwq5"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.123924 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xtx8w"] Nov 24 09:21:07 crc kubenswrapper[4719]: I1124 09:21:07.132357 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kknhq"] Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.534510 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28882cd2-f05b-4e9a-8e96-1c49236337db" path="/var/lib/kubelet/pods/28882cd2-f05b-4e9a-8e96-1c49236337db/volumes" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.535720 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b" path="/var/lib/kubelet/pods/5a11d0ea-ed2f-4fa2-bcd9-e91d22b0478b/volumes" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.536677 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c080d6-f9b4-42d9-a09c-efad1904b2cf" path="/var/lib/kubelet/pods/a4c080d6-f9b4-42d9-a09c-efad1904b2cf/volumes" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.537578 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be421b32-1776-4720-b49e-0188e6cbad0f" path="/var/lib/kubelet/pods/be421b32-1776-4720-b49e-0188e6cbad0f/volumes" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.539285 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e008dc82-a46e-4cb3-b2c7-d05598f51373" path="/var/lib/kubelet/pods/e008dc82-a46e-4cb3-b2c7-d05598f51373/volumes" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.540012 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcfb8371-3ece-4ec3-871c-d9eb12e4eb58" path="/var/lib/kubelet/pods/fcfb8371-3ece-4ec3-871c-d9eb12e4eb58/volumes" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.806502 4719 scope.go:117] "RemoveContainer" containerID="e3e2f7e1de4576458f3052e3486213a2242e885e5e1316121c34f5d097b4fcef" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.841812 4719 scope.go:117] 
"RemoveContainer" containerID="f356a860b28f62e883b02e18d85a49ed993149b81f030cce90785ddf239c56ce" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.904697 4719 scope.go:117] "RemoveContainer" containerID="3516c77303a15e0a2dbdc863658ea007d3438f722ddcee5e99c75463c8a928e4" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.938772 4719 scope.go:117] "RemoveContainer" containerID="86a840314ef2a6eac6790008c5fb77711b8e35643341882568031a4b44a17e9e" Nov 24 09:21:08 crc kubenswrapper[4719]: I1124 09:21:08.992328 4719 scope.go:117] "RemoveContainer" containerID="c36f2fb55a6a9a55d0f711e2f83b034aeed25958d39828966e4b28b5b463cca2" Nov 24 09:21:09 crc kubenswrapper[4719]: I1124 09:21:09.085498 4719 scope.go:117] "RemoveContainer" containerID="2e411a4c5763552bd33d79cfac2eb365a29bb37527babb63883fc74366bb2565" Nov 24 09:21:09 crc kubenswrapper[4719]: I1124 09:21:09.110846 4719 scope.go:117] "RemoveContainer" containerID="3e9bb87b7ae6edde755e9a7f64e058c546a81c25c3c38ce0ccd29af7f89bc40c" Nov 24 09:21:09 crc kubenswrapper[4719]: I1124 09:21:09.130545 4719 scope.go:117] "RemoveContainer" containerID="31e009c359a2805feace323364cd3fc336cfdaa32b8b6cdfd630de3f46e13e8e" Nov 24 09:21:09 crc kubenswrapper[4719]: I1124 09:21:09.166566 4719 scope.go:117] "RemoveContainer" containerID="c57f66d7b83e885df01da0887588bcd8d5ba9de9349303fdc2352d89464ce644" Nov 24 09:21:09 crc kubenswrapper[4719]: I1124 09:21:09.185360 4719 scope.go:117] "RemoveContainer" containerID="e29bee50aa3a67544b73ad8537d937852c7a176571fc24018ee61a8b15b59ed4" Nov 24 09:21:09 crc kubenswrapper[4719]: I1124 09:21:09.211750 4719 scope.go:117] "RemoveContainer" containerID="651ef74065ee33b2f85b28c87044ef020143932f55463ff20813d1420b44021b" Nov 24 09:21:09 crc kubenswrapper[4719]: I1124 09:21:09.229474 4719 scope.go:117] "RemoveContainer" containerID="6f50a492a449685aac62f4cd929b3ea899cadc55f3b5d2b1e0880ae72e9a3b2d" Nov 24 09:21:09 crc kubenswrapper[4719]: I1124 09:21:09.247363 4719 scope.go:117] "RemoveContainer" containerID="3cdc16f819fb81378f78092adf291bd1d2869a5d97e109d35ff9fe78567b2521" Nov 24 09:21:16 crc kubenswrapper[4719]: I1124 09:21:16.028070 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-bx8fg"] Nov 24 09:21:16 crc kubenswrapper[4719]: I1124 09:21:16.035322 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-bx8fg"] Nov 24 09:21:16 crc kubenswrapper[4719]: I1124 09:21:16.534050 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16010248-d22e-4551-a3ba-f8b61f6ae440" path="/var/lib/kubelet/pods/16010248-d22e-4551-a3ba-f8b61f6ae440/volumes" Nov 24 09:21:17 crc kubenswrapper[4719]: I1124 09:21:17.462344 4719 generic.go:334] "Generic (PLEG): container finished" podID="67c842f7-bace-4901-a2a7-b2e3ca12ff5e" containerID="c89f8519f492c64d9ebba9faa6d076032ace204c019d81e8ea3cea13dea82ef1" exitCode=0 Nov 24 09:21:17 crc kubenswrapper[4719]: I1124 09:21:17.462417 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" event={"ID":"67c842f7-bace-4901-a2a7-b2e3ca12ff5e","Type":"ContainerDied","Data":"c89f8519f492c64d9ebba9faa6d076032ace204c019d81e8ea3cea13dea82ef1"} Nov 24 09:21:18 crc kubenswrapper[4719]: I1124 09:21:18.521073 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:21:18 crc kubenswrapper[4719]: E1124 09:21:18.521306 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:21:18 crc kubenswrapper[4719]: I1124 09:21:18.924277 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.023389 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rbw8\" (UniqueName: \"kubernetes.io/projected/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-kube-api-access-6rbw8\") pod \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.023487 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-inventory\") pod \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.023568 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-ssh-key\") pod \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\" (UID: \"67c842f7-bace-4901-a2a7-b2e3ca12ff5e\") " Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.028950 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-kube-api-access-6rbw8" (OuterVolumeSpecName: "kube-api-access-6rbw8") pod "67c842f7-bace-4901-a2a7-b2e3ca12ff5e" (UID: "67c842f7-bace-4901-a2a7-b2e3ca12ff5e"). InnerVolumeSpecName "kube-api-access-6rbw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.049318 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "67c842f7-bace-4901-a2a7-b2e3ca12ff5e" (UID: "67c842f7-bace-4901-a2a7-b2e3ca12ff5e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.056529 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-inventory" (OuterVolumeSpecName: "inventory") pod "67c842f7-bace-4901-a2a7-b2e3ca12ff5e" (UID: "67c842f7-bace-4901-a2a7-b2e3ca12ff5e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.125494 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.125754 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.125837 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rbw8\" (UniqueName: \"kubernetes.io/projected/67c842f7-bace-4901-a2a7-b2e3ca12ff5e-kube-api-access-6rbw8\") on node \"crc\" DevicePath \"\"" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.480433 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" event={"ID":"67c842f7-bace-4901-a2a7-b2e3ca12ff5e","Type":"ContainerDied","Data":"63b82194179d3b3c51d8ad72cc9cd6bc2f954771f9ed8bf7f776fc03a84f9c96"} Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.480463 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.480480 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b82194179d3b3c51d8ad72cc9cd6bc2f954771f9ed8bf7f776fc03a84f9c96" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.567837 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"] Nov 24 09:21:19 crc kubenswrapper[4719]: E1124 09:21:19.568186 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerName="extract-content" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.568198 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerName="extract-content" Nov 24 09:21:19 crc kubenswrapper[4719]: E1124 09:21:19.568212 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c842f7-bace-4901-a2a7-b2e3ca12ff5e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.568220 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c842f7-bace-4901-a2a7-b2e3ca12ff5e" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:21:19 crc kubenswrapper[4719]: E1124 09:21:19.568246 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerName="registry-server" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.568252 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerName="registry-server" Nov 24 09:21:19 crc kubenswrapper[4719]: E1124 09:21:19.568271 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerName="extract-utilities" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.568277 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerName="extract-utilities" Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.572585 4719 
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.572628 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="49908e7c-1b8c-4009-ae2b-daa7620d2a19" containerName="registry-server"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.573272 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.576606 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.576820 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.576953 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.577100 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.588709 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"]
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.638079 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r52qr\" (UniqueName: \"kubernetes.io/projected/e7330eb5-2c71-4ee9-b835-72cc930cecdd-kube-api-access-r52qr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.638177 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.638263 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.740661 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r52qr\" (UniqueName: \"kubernetes.io/projected/e7330eb5-2c71-4ee9-b835-72cc930cecdd-kube-api-access-r52qr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.740727 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.740774 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.751051 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.753504 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.759400 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r52qr\" (UniqueName: \"kubernetes.io/projected/e7330eb5-2c71-4ee9-b835-72cc930cecdd-kube-api-access-r52qr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ft68h\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:19 crc kubenswrapper[4719]: I1124 09:21:19.895603 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:20 crc kubenswrapper[4719]: I1124 09:21:20.453259 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"]
Nov 24 09:21:20 crc kubenswrapper[4719]: I1124 09:21:20.491588 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h" event={"ID":"e7330eb5-2c71-4ee9-b835-72cc930cecdd","Type":"ContainerStarted","Data":"38ce09103522b2d7a9efdb4897573cbe44be576fb054435f08fe94e01a47abf7"}
Nov 24 09:21:21 crc kubenswrapper[4719]: I1124 09:21:21.508177 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h" event={"ID":"e7330eb5-2c71-4ee9-b835-72cc930cecdd","Type":"ContainerStarted","Data":"117a45896fcdf118e58703d73e254195d7d7d52e29dd4cb1f3e15184cd223ac0"}
Nov 24 09:21:21 crc kubenswrapper[4719]: I1124 09:21:21.529640 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h" podStartSLOduration=1.9089615960000001 podStartE2EDuration="2.529622205s" podCreationTimestamp="2025-11-24 09:21:19 +0000 UTC" firstStartedPulling="2025-11-24 09:21:20.455202267 +0000 UTC m=+1656.786475519" lastFinishedPulling="2025-11-24 09:21:21.075862866 +0000 UTC m=+1657.407136128" observedRunningTime="2025-11-24 09:21:21.527439162 +0000 UTC m=+1657.858712454" watchObservedRunningTime="2025-11-24 09:21:21.529622205 +0000 UTC m=+1657.860895457"
Nov 24 09:21:26 crc kubenswrapper[4719]: I1124 09:21:26.563908 4719 generic.go:334] "Generic (PLEG): container finished" podID="e7330eb5-2c71-4ee9-b835-72cc930cecdd" containerID="117a45896fcdf118e58703d73e254195d7d7d52e29dd4cb1f3e15184cd223ac0" exitCode=0
Nov 24 09:21:26 crc kubenswrapper[4719]: I1124 09:21:26.564004 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h" event={"ID":"e7330eb5-2c71-4ee9-b835-72cc930cecdd","Type":"ContainerDied","Data":"117a45896fcdf118e58703d73e254195d7d7d52e29dd4cb1f3e15184cd223ac0"}
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.022444 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
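[editor's note] The pod_startup_latency_tracker line above carries enough to check its own arithmetic: podStartSLOduration is the end-to-end duration minus the image-pull window, and the `m=+...` suffixes are monotonic-clock offsets, so subtracting them sidesteps wall-clock skew. A sketch using the exact values logged above:

```python
import re

# Sketch: verify podStartSLOduration = E2E duration - image-pull window,
# using the monotonic m=+ offsets from the log line above.
LINE = ('podStartSLOduration=1.9089615960000001 podStartE2EDuration="2.529622205s" '
        'firstStartedPulling="2025-11-24 09:21:20.455202267 +0000 UTC m=+1656.786475519" '
        'lastFinishedPulling="2025-11-24 09:21:21.075862866 +0000 UTC m=+1657.407136128"')

def mono(field):
    # Pull the monotonic offset (seconds since kubelet start) for a field.
    return float(re.search(field + r'="[^"]*m=\+([0-9.]+)"', LINE).group(1))

pull = mono("lastFinishedPulling") - mono("firstStartedPulling")
e2e = float(re.search(r'podStartE2EDuration="([0-9.]+)s"', LINE).group(1))
slo = float(re.search(r'podStartSLOduration=([0-9.]+)', LINE).group(1))
assert abs((e2e - pull) - slo) < 1e-6  # SLO duration excludes image-pull time
print(f"pull={pull:.9f}s e2e={e2e}s slo={slo:.9f}s")
```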
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.091575 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-inventory\") pod \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") "
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.091712 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-ssh-key\") pod \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") "
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.091886 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r52qr\" (UniqueName: \"kubernetes.io/projected/e7330eb5-2c71-4ee9-b835-72cc930cecdd-kube-api-access-r52qr\") pod \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\" (UID: \"e7330eb5-2c71-4ee9-b835-72cc930cecdd\") "
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.099239 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7330eb5-2c71-4ee9-b835-72cc930cecdd-kube-api-access-r52qr" (OuterVolumeSpecName: "kube-api-access-r52qr") pod "e7330eb5-2c71-4ee9-b835-72cc930cecdd" (UID: "e7330eb5-2c71-4ee9-b835-72cc930cecdd"). InnerVolumeSpecName "kube-api-access-r52qr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.123067 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-inventory" (OuterVolumeSpecName: "inventory") pod "e7330eb5-2c71-4ee9-b835-72cc930cecdd" (UID: "e7330eb5-2c71-4ee9-b835-72cc930cecdd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.123143 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e7330eb5-2c71-4ee9-b835-72cc930cecdd" (UID: "e7330eb5-2c71-4ee9-b835-72cc930cecdd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.194546 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r52qr\" (UniqueName: \"kubernetes.io/projected/e7330eb5-2c71-4ee9-b835-72cc930cecdd-kube-api-access-r52qr\") on node \"crc\" DevicePath \"\""
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.194573 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.194583 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7330eb5-2c71-4ee9-b835-72cc930cecdd-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.602455 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h" event={"ID":"e7330eb5-2c71-4ee9-b835-72cc930cecdd","Type":"ContainerDied","Data":"38ce09103522b2d7a9efdb4897573cbe44be576fb054435f08fe94e01a47abf7"}
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.602837 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ce09103522b2d7a9efdb4897573cbe44be576fb054435f08fe94e01a47abf7"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.602513 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.672422 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"]
Nov 24 09:21:28 crc kubenswrapper[4719]: E1124 09:21:28.672781 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7330eb5-2c71-4ee9-b835-72cc930cecdd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.672819 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7330eb5-2c71-4ee9-b835-72cc930cecdd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.672992 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7330eb5-2c71-4ee9-b835-72cc930cecdd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.673643 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.676547 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.677400 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.677594 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.678206 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.690726 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"]
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.702683 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.702837 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.702889 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msbjl\" (UniqueName: \"kubernetes.io/projected/9ce53f85-5ce6-4f87-9212-49c23937f92c-kube-api-access-msbjl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.804525 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msbjl\" (UniqueName: \"kubernetes.io/projected/9ce53f85-5ce6-4f87-9212-49c23937f92c-kube-api-access-msbjl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.804642 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.804781 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.811141 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.812006 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:28 crc kubenswrapper[4719]: I1124 09:21:28.825666 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msbjl\" (UniqueName: \"kubernetes.io/projected/9ce53f85-5ce6-4f87-9212-49c23937f92c-kube-api-access-msbjl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bv8nm\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:29 crc kubenswrapper[4719]: I1124 09:21:29.040715 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:21:29 crc kubenswrapper[4719]: I1124 09:21:29.535346 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"]
Nov 24 09:21:29 crc kubenswrapper[4719]: I1124 09:21:29.613015 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm" event={"ID":"9ce53f85-5ce6-4f87-9212-49c23937f92c","Type":"ContainerStarted","Data":"0d81c1a33c64db373618e84cb65157c439314862393c7099f4348ce30574ce0a"}
Nov 24 09:21:30 crc kubenswrapper[4719]: I1124 09:21:30.621343 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm" event={"ID":"9ce53f85-5ce6-4f87-9212-49c23937f92c","Type":"ContainerStarted","Data":"0d27cc908daee8716c49620ec3a9e45828f0ed247e1aed219ae9378e707906dd"}
Nov 24 09:21:30 crc kubenswrapper[4719]: I1124 09:21:30.643966 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm" podStartSLOduration=1.887267226 podStartE2EDuration="2.64394016s" podCreationTimestamp="2025-11-24 09:21:28 +0000 UTC" firstStartedPulling="2025-11-24 09:21:29.542983047 +0000 UTC m=+1665.874256299" lastFinishedPulling="2025-11-24 09:21:30.299655981 +0000 UTC m=+1666.630929233" observedRunningTime="2025-11-24 09:21:30.636719672 +0000 UTC m=+1666.967992934" watchObservedRunningTime="2025-11-24 09:21:30.64394016 +0000 UTC m=+1666.975213422"
Nov 24 09:21:33 crc kubenswrapper[4719]: I1124 09:21:33.521491 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"
Nov 24 09:21:33 crc kubenswrapper[4719]: E1124 09:21:33.522301 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:21:42 crc kubenswrapper[4719]: I1124 09:21:42.036872 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-k2l9n"]
Nov 24 09:21:42 crc kubenswrapper[4719]: I1124 09:21:42.045065 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-k2l9n"]
Nov 24 09:21:42 crc kubenswrapper[4719]: I1124 09:21:42.532589 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32da9e0b-97ee-48e0-bdd2-2c21bb019294" path="/var/lib/kubelet/pods/32da9e0b-97ee-48e0-bdd2-2c21bb019294/volumes"
Nov 24 09:21:44 crc kubenswrapper[4719]: I1124 09:21:44.035533 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ht2vd"]
Nov 24 09:21:44 crc kubenswrapper[4719]: I1124 09:21:44.042797 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ht2vd"]
Nov 24 09:21:44 crc kubenswrapper[4719]: I1124 09:21:44.530989 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e1bf4ab-344c-4335-b16a-828d28141f11" path="/var/lib/kubelet/pods/2e1bf4ab-344c-4335-b16a-828d28141f11/volumes"
Nov 24 09:21:46 crc kubenswrapper[4719]: I1124 09:21:46.028534 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kggqc"]
Nov 24 09:21:46 crc kubenswrapper[4719]: I1124 09:21:46.036340 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kggqc"]
Nov 24 09:21:46 crc kubenswrapper[4719]: I1124 09:21:46.530173 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a9592e-0967-49ec-a421-66e027b6d56a" path="/var/lib/kubelet/pods/84a9592e-0967-49ec-a421-66e027b6d56a/volumes"
Nov 24 09:21:47 crc kubenswrapper[4719]: I1124 09:21:47.520591 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"
Nov 24 09:21:47 crc kubenswrapper[4719]: E1124 09:21:47.520945 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:21:48 crc kubenswrapper[4719]: I1124 09:21:48.026473 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jcgws"]
Nov 24 09:21:48 crc kubenswrapper[4719]: I1124 09:21:48.034008 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jcgws"]
Nov 24 09:21:48 crc kubenswrapper[4719]: I1124 09:21:48.530469 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb9444b-a866-41c9-af6d-831061243d3c" path="/var/lib/kubelet/pods/ddb9444b-a866-41c9-af6d-831061243d3c/volumes"
Nov 24 09:22:00 crc kubenswrapper[4719]: I1124 09:22:00.520968 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"
Nov 24 09:22:00 crc kubenswrapper[4719]: E1124 09:22:00.521691 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:22:08 crc kubenswrapper[4719]: I1124 09:22:08.043160 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8bn65"]
Nov 24 09:22:08 crc kubenswrapper[4719]: I1124 09:22:08.051607 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8bn65"]
Nov 24 09:22:08 crc kubenswrapper[4719]: I1124 09:22:08.545320 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902a4567-228a-43e0-b6c4-c323c4366c94" path="/var/lib/kubelet/pods/902a4567-228a-43e0-b6c4-c323c4366c94/volumes"
Nov 24 09:22:09 crc kubenswrapper[4719]: I1124 09:22:09.521134 4719 scope.go:117] "RemoveContainer" containerID="fbe1391080b3ad2ec9d1385da1c125e20804d2e3a60b3e12d58aa350fc0bd326"
Nov 24 09:22:09 crc kubenswrapper[4719]: I1124 09:22:09.588018 4719 scope.go:117] "RemoveContainer" containerID="b854ce9f7d89a39993476d675b4312e386b3801aef8b2c845902af90e55cdc18"
Nov 24 09:22:09 crc kubenswrapper[4719]: I1124 09:22:09.619089 4719 scope.go:117] "RemoveContainer" containerID="00c542ad24716575f59444038f676feaa5fa431f3827a880e2d8df112f5fbfbf"
Nov 24 09:22:09 crc kubenswrapper[4719]: I1124 09:22:09.675429 4719 scope.go:117] "RemoveContainer" containerID="1c2b454f96566e0f7f527de9b6ce08e339cbd2b34451cb98829c77dbc7327c82"
Nov 24 09:22:09 crc kubenswrapper[4719]: I1124 09:22:09.723097 4719 scope.go:117] "RemoveContainer" containerID="d2d6692fa00534dc12ffb23def6ee8755851aa7601abdb202c2bf066688f9a82"
Nov 24 09:22:09 crc kubenswrapper[4719]: I1124 09:22:09.765125 4719 scope.go:117] "RemoveContainer" containerID="249bd316aa3178b10dabe1da063dfc5c37b759599c82c1bcb717ec8164f6fa7b"
Nov 24 09:22:12 crc kubenswrapper[4719]: I1124 09:22:12.991339 4719 generic.go:334] "Generic (PLEG): container finished" podID="9ce53f85-5ce6-4f87-9212-49c23937f92c" containerID="0d27cc908daee8716c49620ec3a9e45828f0ed247e1aed219ae9378e707906dd" exitCode=0
Nov 24 09:22:12 crc kubenswrapper[4719]: I1124 09:22:12.991433 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm" event={"ID":"9ce53f85-5ce6-4f87-9212-49c23937f92c","Type":"ContainerDied","Data":"0d27cc908daee8716c49620ec3a9e45828f0ed247e1aed219ae9378e707906dd"}
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.382213 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
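[editor's note] The deletions above follow a consistent sequence: "SyncLoop DELETE" arrives from the API, "SyncLoop REMOVE" follows once the kubelet finishes local cleanup, and "Cleaned up orphaned pod volumes dir" later confirms the pod's volumes directory is gone. A sketch pairing DELETE with REMOVE per pod name, keyed to the `pods=["..."]` field in the lines above:

```python
import re

# Sketch: track each pod's deletion state from SyncLoop DELETE/REMOVE lines.
PODS = re.compile(r'pods=\["([^"]+)"\]')

def delete_remove_pairs(lines):
    state = {}
    for line in lines:
        m = PODS.search(line)
        if not m:
            continue
        if '"SyncLoop DELETE"' in line:
            state[m.group(1)] = "deleted"
        elif '"SyncLoop REMOVE"' in line:
            state[m.group(1)] = "removed"
    return state

sample = ['I1124 09:21:42.036872 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-k2l9n"]',
          'I1124 09:21:42.045065 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-k2l9n"]']
print(delete_remove_pairs(sample))  # {'openstack/neutron-db-sync-k2l9n': 'removed'}
```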
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.527282 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"
Nov 24 09:22:14 crc kubenswrapper[4719]: E1124 09:22:14.527520 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.549589 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-inventory\") pod \"9ce53f85-5ce6-4f87-9212-49c23937f92c\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") "
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.549777 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msbjl\" (UniqueName: \"kubernetes.io/projected/9ce53f85-5ce6-4f87-9212-49c23937f92c-kube-api-access-msbjl\") pod \"9ce53f85-5ce6-4f87-9212-49c23937f92c\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") "
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.549822 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-ssh-key\") pod \"9ce53f85-5ce6-4f87-9212-49c23937f92c\" (UID: \"9ce53f85-5ce6-4f87-9212-49c23937f92c\") "
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.557165 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce53f85-5ce6-4f87-9212-49c23937f92c-kube-api-access-msbjl" (OuterVolumeSpecName: "kube-api-access-msbjl") pod "9ce53f85-5ce6-4f87-9212-49c23937f92c" (UID: "9ce53f85-5ce6-4f87-9212-49c23937f92c"). InnerVolumeSpecName "kube-api-access-msbjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.574614 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-inventory" (OuterVolumeSpecName: "inventory") pod "9ce53f85-5ce6-4f87-9212-49c23937f92c" (UID: "9ce53f85-5ce6-4f87-9212-49c23937f92c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.580252 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9ce53f85-5ce6-4f87-9212-49c23937f92c" (UID: "9ce53f85-5ce6-4f87-9212-49c23937f92c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.652998 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msbjl\" (UniqueName: \"kubernetes.io/projected/9ce53f85-5ce6-4f87-9212-49c23937f92c-kube-api-access-msbjl\") on node \"crc\" DevicePath \"\""
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.653050 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 09:22:14 crc kubenswrapper[4719]: I1124 09:22:14.653060 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce53f85-5ce6-4f87-9212-49c23937f92c-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.008567 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm" event={"ID":"9ce53f85-5ce6-4f87-9212-49c23937f92c","Type":"ContainerDied","Data":"0d81c1a33c64db373618e84cb65157c439314862393c7099f4348ce30574ce0a"}
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.008615 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d81c1a33c64db373618e84cb65157c439314862393c7099f4348ce30574ce0a"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.008614 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.085880 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"]
Nov 24 09:22:15 crc kubenswrapper[4719]: E1124 09:22:15.086359 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce53f85-5ce6-4f87-9212-49c23937f92c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.086379 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce53f85-5ce6-4f87-9212-49c23937f92c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.086622 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce53f85-5ce6-4f87-9212-49c23937f92c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.087395 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.089721 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.089817 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.090087 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.090131 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.105355 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"]
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.163579 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.163672 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzbv9\" (UniqueName: \"kubernetes.io/projected/d4e61d99-60c6-4031-b2ec-69289a6e5d52-kube-api-access-kzbv9\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.163711 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.266171 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.266300 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzbv9\" (UniqueName: \"kubernetes.io/projected/d4e61d99-60c6-4031-b2ec-69289a6e5d52-kube-api-access-kzbv9\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.266355 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.270418 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.272668 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.285578 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzbv9\" (UniqueName: \"kubernetes.io/projected/d4e61d99-60c6-4031-b2ec-69289a6e5d52-kube-api-access-kzbv9\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.403128 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"
Nov 24 09:22:15 crc kubenswrapper[4719]: I1124 09:22:15.938761 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"]
Nov 24 09:22:16 crc kubenswrapper[4719]: I1124 09:22:16.017975 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt" event={"ID":"d4e61d99-60c6-4031-b2ec-69289a6e5d52","Type":"ContainerStarted","Data":"5bc7338c813488bd6aee7bbf21eedabf810dbce6d25aabfd1c1062e6c72916a5"}
Nov 24 09:22:17 crc kubenswrapper[4719]: I1124 09:22:17.026877 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt" event={"ID":"d4e61d99-60c6-4031-b2ec-69289a6e5d52","Type":"ContainerStarted","Data":"360e09179a9ae78823ab350d8c411b432ca3238a792954574315d82e545b661e"}
Nov 24 09:22:17 crc kubenswrapper[4719]: I1124 09:22:17.061663 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt" podStartSLOduration=1.635730595 podStartE2EDuration="2.061643438s" podCreationTimestamp="2025-11-24 09:22:15 +0000 UTC" firstStartedPulling="2025-11-24 09:22:15.951594326 +0000 UTC m=+1712.282867578" lastFinishedPulling="2025-11-24 09:22:16.377507169 +0000 UTC m=+1712.708780421" observedRunningTime="2025-11-24 09:22:17.055840402 +0000 UTC m=+1713.387113674" watchObservedRunningTime="2025-11-24 09:22:17.061643438 +0000 UTC m=+1713.392916710"
Nov 24 09:22:21 crc kubenswrapper[4719]: I1124 09:22:21.058737 4719 generic.go:334] "Generic (PLEG): container finished" podID="d4e61d99-60c6-4031-b2ec-69289a6e5d52" containerID="360e09179a9ae78823ab350d8c411b432ca3238a792954574315d82e545b661e" exitCode=0
Nov 24 09:22:21 crc kubenswrapper[4719]: I1124 09:22:21.058821 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt" event={"ID":"d4e61d99-60c6-4031-b2ec-69289a6e5d52","Type":"ContainerDied","Data":"360e09179a9ae78823ab350d8c411b432ca3238a792954574315d82e545b661e"}
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt" event={"ID":"d4e61d99-60c6-4031-b2ec-69289a6e5d52","Type":"ContainerDied","Data":"360e09179a9ae78823ab350d8c411b432ca3238a792954574315d82e545b661e"} Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.426424 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt" Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.599300 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-ssh-key\") pod \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.599572 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-inventory\") pod \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.599707 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzbv9\" (UniqueName: \"kubernetes.io/projected/d4e61d99-60c6-4031-b2ec-69289a6e5d52-kube-api-access-kzbv9\") pod \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\" (UID: \"d4e61d99-60c6-4031-b2ec-69289a6e5d52\") " Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.605282 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e61d99-60c6-4031-b2ec-69289a6e5d52-kube-api-access-kzbv9" (OuterVolumeSpecName: "kube-api-access-kzbv9") pod "d4e61d99-60c6-4031-b2ec-69289a6e5d52" (UID: "d4e61d99-60c6-4031-b2ec-69289a6e5d52"). InnerVolumeSpecName "kube-api-access-kzbv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.632611 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-inventory" (OuterVolumeSpecName: "inventory") pod "d4e61d99-60c6-4031-b2ec-69289a6e5d52" (UID: "d4e61d99-60c6-4031-b2ec-69289a6e5d52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.633026 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d4e61d99-60c6-4031-b2ec-69289a6e5d52" (UID: "d4e61d99-60c6-4031-b2ec-69289a6e5d52"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.702654 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.702690 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e61d99-60c6-4031-b2ec-69289a6e5d52-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:22 crc kubenswrapper[4719]: I1124 09:22:22.702700 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzbv9\" (UniqueName: \"kubernetes.io/projected/d4e61d99-60c6-4031-b2ec-69289a6e5d52-kube-api-access-kzbv9\") on node \"crc\" DevicePath \"\"" Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.079686 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt" event={"ID":"d4e61d99-60c6-4031-b2ec-69289a6e5d52","Type":"ContainerDied","Data":"5bc7338c813488bd6aee7bbf21eedabf810dbce6d25aabfd1c1062e6c72916a5"} Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.079911 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc7338c813488bd6aee7bbf21eedabf810dbce6d25aabfd1c1062e6c72916a5" Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.079746 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt" Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.158129 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"] Nov 24 09:22:23 crc kubenswrapper[4719]: E1124 09:22:23.158675 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e61d99-60c6-4031-b2ec-69289a6e5d52" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.158703 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e61d99-60c6-4031-b2ec-69289a6e5d52" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.159017 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e61d99-60c6-4031-b2ec-69289a6e5d52" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.159858 4719 util.go:30] "No sandbox for pod can be found. 
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.164056 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.164172 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.164614 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.164832 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.170521 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"]
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.312363 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzkxk\" (UniqueName: \"kubernetes.io/projected/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-kube-api-access-qzkxk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.312442 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.312643 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.414090 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.414236 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.414353 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzkxk\" (UniqueName: \"kubernetes.io/projected/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-kube-api-access-qzkxk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.421121 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.421558 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.431884 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzkxk\" (UniqueName: \"kubernetes.io/projected/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-kube-api-access-qzkxk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.500221 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"
Nov 24 09:22:23 crc kubenswrapper[4719]: I1124 09:22:23.997763 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"]
Nov 24 09:22:24 crc kubenswrapper[4719]: I1124 09:22:24.089789 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw" event={"ID":"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb","Type":"ContainerStarted","Data":"140f5b8c5f360d2e4fcd32c351dae5c5c91b9a31bd36a6ec451e102c8d41d25c"}
Nov 24 09:22:25 crc kubenswrapper[4719]: I1124 09:22:25.099724 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw" event={"ID":"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb","Type":"ContainerStarted","Data":"1c8a44e4051e38a8fbe6bd555013142154c498f23f9cbe1cbea604e08f72102b"}
Nov 24 09:22:25 crc kubenswrapper[4719]: I1124 09:22:25.127401 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw" podStartSLOduration=1.607488881 podStartE2EDuration="2.127366672s" podCreationTimestamp="2025-11-24 09:22:23 +0000 UTC" firstStartedPulling="2025-11-24 09:22:23.999676164 +0000 UTC m=+1720.330949416" lastFinishedPulling="2025-11-24 09:22:24.519553945 +0000 UTC m=+1720.850827207" observedRunningTime="2025-11-24 09:22:25.113605949 +0000 UTC m=+1721.444879221" watchObservedRunningTime="2025-11-24 09:22:25.127366672 +0000 UTC m=+1721.458640024"
Nov 24 09:22:29 crc kubenswrapper[4719]: I1124 09:22:29.520588 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"
Nov 24 09:22:29 crc kubenswrapper[4719]: E1124 09:22:29.521298 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:22:42 crc kubenswrapper[4719]: I1124 09:22:42.520806 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"
Nov 24 09:22:42 crc kubenswrapper[4719]: E1124 09:22:42.522854 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.036938 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qkjqj"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.043358 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qkjqj"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.050490 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1017-account-create-vg8zv"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.057178 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-s6jf5"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.065395 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c34a-account-create-z5z27"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.072940 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1017-account-create-vg8zv"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.082663 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e450-account-create-xv7lm"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.089060 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xhrh9"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.096084 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-s6jf5"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.102421 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c34a-account-create-z5z27"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.108192 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e450-account-create-xv7lm"]
Nov 24 09:22:49 crc kubenswrapper[4719]: I1124 09:22:49.113476 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xhrh9"]
Nov 24 09:22:50 crc kubenswrapper[4719]: I1124 09:22:50.530921 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496251b9-2f65-457d-b68a-84d23bc3b05c" path="/var/lib/kubelet/pods/496251b9-2f65-457d-b68a-84d23bc3b05c/volumes"
Nov 24 09:22:50 crc kubenswrapper[4719]: I1124 09:22:50.531648 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e484202-a53f-45ea-a78e-a596ab07ff66" path="/var/lib/kubelet/pods/9e484202-a53f-45ea-a78e-a596ab07ff66/volumes"
Nov 24 09:22:50 crc kubenswrapper[4719]: I1124 09:22:50.532177 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a284fb9e-518e-4ae6-b20b-8016ed5eef59" path="/var/lib/kubelet/pods/a284fb9e-518e-4ae6-b20b-8016ed5eef59/volumes"
Nov 24 09:22:50 crc kubenswrapper[4719]: I1124 09:22:50.532686 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad086a9d-061e-45a5-8364-758c44b03485" path="/var/lib/kubelet/pods/ad086a9d-061e-45a5-8364-758c44b03485/volumes"
Nov 24 09:22:50 crc kubenswrapper[4719]: I1124 09:22:50.533758 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8d7749-fbf6-4898-bebf-8df0fe88d0fa" path="/var/lib/kubelet/pods/ec8d7749-fbf6-4898-bebf-8df0fe88d0fa/volumes"
Nov 24 09:22:50 crc kubenswrapper[4719]: I1124 09:22:50.534313 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f766839a-efee-4eb9-bfa5-ba2d5329af55" path="/var/lib/kubelet/pods/f766839a-efee-4eb9-bfa5-ba2d5329af55/volumes"
Nov 24 09:22:56 crc kubenswrapper[4719]: I1124 09:22:56.520751 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"
Nov 24 09:22:56 crc kubenswrapper[4719]: E1124 09:22:56.521473 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:23:09 crc kubenswrapper[4719]: I1124 09:23:09.906195 4719 scope.go:117] "RemoveContainer" containerID="a78e774427a1be728f20ae8faa783eabbcf28a272874cc10dd7bb0bd4f53f69a"
Nov 24 09:23:09 crc kubenswrapper[4719]: I1124 09:23:09.946285 4719 scope.go:117] "RemoveContainer" containerID="bc75821ffacf307a4f0cc64398940237c2c3259437457f4bab23989221e1d80e"
Nov 24 09:23:09 crc kubenswrapper[4719]: I1124 09:23:09.982588 4719 scope.go:117] "RemoveContainer" containerID="195a8c333514f59c03658a4f78a5231b7ec69e5dd788519b8aa6b679c0ee0ee1"
Nov 24 09:23:10 crc kubenswrapper[4719]: I1124 09:23:10.023010 4719 scope.go:117] "RemoveContainer" containerID="b90cb70c9465eb8b791b70f06bbc6f52a19e1701400a951e5d67552f0e477d9b"
Nov 24 09:23:10 crc kubenswrapper[4719]: I1124 09:23:10.064026 4719 scope.go:117] "RemoveContainer" containerID="5b497addf26c8af27f377a92489dab3d2d3ccd6764a24de70fc4ea6cc4a16257"
Nov 24 09:23:10 crc kubenswrapper[4719]: I1124 09:23:10.103553 4719 scope.go:117] "RemoveContainer" containerID="b960c4b24d5aa8e2656489943ae260231f26a7a4e5b4ce3959ac8197f6bb4a05"
Nov 24 09:23:10 crc kubenswrapper[4719]: I1124 09:23:10.520909 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b"
Nov 24 09:23:10 crc kubenswrapper[4719]: E1124 09:23:10.521255 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:23:19 crc kubenswrapper[4719]: I1124 09:23:19.050636 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xxwbr"]
Nov 24 09:23:19 crc kubenswrapper[4719]: I1124 09:23:19.057714 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xxwbr"]
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xxwbr"] Nov 24 09:23:20 crc kubenswrapper[4719]: I1124 09:23:20.536397 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30" path="/var/lib/kubelet/pods/d1629c5d-5eb0-4e8a-9f5a-68b0ab618f30/volumes" Nov 24 09:23:23 crc kubenswrapper[4719]: I1124 09:23:23.059330 4719 generic.go:334] "Generic (PLEG): container finished" podID="03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb" containerID="1c8a44e4051e38a8fbe6bd555013142154c498f23f9cbe1cbea604e08f72102b" exitCode=0 Nov 24 09:23:23 crc kubenswrapper[4719]: I1124 09:23:23.059424 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw" event={"ID":"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb","Type":"ContainerDied","Data":"1c8a44e4051e38a8fbe6bd555013142154c498f23f9cbe1cbea604e08f72102b"} Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.427371 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw" Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.526623 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:23:24 crc kubenswrapper[4719]: E1124 09:23:24.526934 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.527492 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-ssh-key\") pod \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.527577 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-inventory\") pod \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.527651 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzkxk\" (UniqueName: \"kubernetes.io/projected/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-kube-api-access-qzkxk\") pod \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\" (UID: \"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb\") " Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.533382 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-kube-api-access-qzkxk" (OuterVolumeSpecName: "kube-api-access-qzkxk") pod "03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb" (UID: "03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb"). InnerVolumeSpecName "kube-api-access-qzkxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.552592 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-inventory" (OuterVolumeSpecName: "inventory") pod "03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb" (UID: "03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.561365 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb" (UID: "03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.629645 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzkxk\" (UniqueName: \"kubernetes.io/projected/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-kube-api-access-qzkxk\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.629673 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:24 crc kubenswrapper[4719]: I1124 09:23:24.629689 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.073301 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw" event={"ID":"03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb","Type":"ContainerDied","Data":"140f5b8c5f360d2e4fcd32c351dae5c5c91b9a31bd36a6ec451e102c8d41d25c"} Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.073646 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="140f5b8c5f360d2e4fcd32c351dae5c5c91b9a31bd36a6ec451e102c8d41d25c" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.073340 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.190553 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lmt95"] Nov 24 09:23:25 crc kubenswrapper[4719]: E1124 09:23:25.190985 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.191006 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.191299 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.192005 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.198989 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.199002 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.202653 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.203163 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.226755 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lmt95"] Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.237663 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.237813 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88c8k\" (UniqueName: \"kubernetes.io/projected/0f8d82a7-24db-4723-a4bb-33af2d084882-kube-api-access-88c8k\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.237897 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.338695 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88c8k\" (UniqueName: \"kubernetes.io/projected/0f8d82a7-24db-4723-a4bb-33af2d084882-kube-api-access-88c8k\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.338766 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.338846 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc 
kubenswrapper[4719]: I1124 09:23:25.343314 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.359531 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.361103 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88c8k\" (UniqueName: \"kubernetes.io/projected/0f8d82a7-24db-4723-a4bb-33af2d084882-kube-api-access-88c8k\") pod \"ssh-known-hosts-edpm-deployment-lmt95\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:25 crc kubenswrapper[4719]: I1124 09:23:25.521628 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:26 crc kubenswrapper[4719]: I1124 09:23:26.038746 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lmt95"] Nov 24 09:23:26 crc kubenswrapper[4719]: I1124 09:23:26.081406 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" event={"ID":"0f8d82a7-24db-4723-a4bb-33af2d084882","Type":"ContainerStarted","Data":"ecfef18fd57b35563f7e048aa8a3e3835276e228546397183e1f3fb58d936124"} Nov 24 09:23:27 crc kubenswrapper[4719]: I1124 09:23:27.092486 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" event={"ID":"0f8d82a7-24db-4723-a4bb-33af2d084882","Type":"ContainerStarted","Data":"a77e5d6526f1d855a85a1f06662e98c310a8d1531425a629b642f242ba0e95aa"} Nov 24 09:23:28 crc kubenswrapper[4719]: I1124 09:23:28.139048 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" podStartSLOduration=2.483873716 podStartE2EDuration="3.139013547s" podCreationTimestamp="2025-11-24 09:23:25 +0000 UTC" firstStartedPulling="2025-11-24 09:23:26.053363206 +0000 UTC m=+1782.384636458" lastFinishedPulling="2025-11-24 09:23:26.708503037 +0000 UTC m=+1783.039776289" observedRunningTime="2025-11-24 09:23:28.134580171 +0000 UTC m=+1784.465853443" watchObservedRunningTime="2025-11-24 09:23:28.139013547 +0000 UTC m=+1784.470286799" Nov 24 09:23:35 crc kubenswrapper[4719]: I1124 09:23:35.176023 4719 generic.go:334] "Generic (PLEG): container finished" podID="0f8d82a7-24db-4723-a4bb-33af2d084882" containerID="a77e5d6526f1d855a85a1f06662e98c310a8d1531425a629b642f242ba0e95aa" exitCode=0 Nov 24 09:23:35 crc kubenswrapper[4719]: I1124 09:23:35.176191 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" event={"ID":"0f8d82a7-24db-4723-a4bb-33af2d084882","Type":"ContainerDied","Data":"a77e5d6526f1d855a85a1f06662e98c310a8d1531425a629b642f242ba0e95aa"} Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.566073 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.654079 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-ssh-key-openstack-edpm-ipam\") pod \"0f8d82a7-24db-4723-a4bb-33af2d084882\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.654303 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-inventory-0\") pod \"0f8d82a7-24db-4723-a4bb-33af2d084882\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.654346 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88c8k\" (UniqueName: \"kubernetes.io/projected/0f8d82a7-24db-4723-a4bb-33af2d084882-kube-api-access-88c8k\") pod \"0f8d82a7-24db-4723-a4bb-33af2d084882\" (UID: \"0f8d82a7-24db-4723-a4bb-33af2d084882\") " Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.659440 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8d82a7-24db-4723-a4bb-33af2d084882-kube-api-access-88c8k" (OuterVolumeSpecName: "kube-api-access-88c8k") pod "0f8d82a7-24db-4723-a4bb-33af2d084882" (UID: "0f8d82a7-24db-4723-a4bb-33af2d084882"). InnerVolumeSpecName "kube-api-access-88c8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.682332 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f8d82a7-24db-4723-a4bb-33af2d084882" (UID: "0f8d82a7-24db-4723-a4bb-33af2d084882"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.682871 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0f8d82a7-24db-4723-a4bb-33af2d084882" (UID: "0f8d82a7-24db-4723-a4bb-33af2d084882"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.755773 4719 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.755801 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88c8k\" (UniqueName: \"kubernetes.io/projected/0f8d82a7-24db-4723-a4bb-33af2d084882-kube-api-access-88c8k\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:36 crc kubenswrapper[4719]: I1124 09:23:36.755811 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f8d82a7-24db-4723-a4bb-33af2d084882-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.199951 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" event={"ID":"0f8d82a7-24db-4723-a4bb-33af2d084882","Type":"ContainerDied","Data":"ecfef18fd57b35563f7e048aa8a3e3835276e228546397183e1f3fb58d936124"} Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.200002 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecfef18fd57b35563f7e048aa8a3e3835276e228546397183e1f3fb58d936124" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.200094 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-lmt95" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.314077 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w"] Nov 24 09:23:37 crc kubenswrapper[4719]: E1124 09:23:37.314515 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8d82a7-24db-4723-a4bb-33af2d084882" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.314538 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8d82a7-24db-4723-a4bb-33af2d084882" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.316764 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8d82a7-24db-4723-a4bb-33af2d084882" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.317530 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.320258 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.320416 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.320423 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.323274 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.331200 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w"] Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.468380 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.468486 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mf8b\" (UniqueName: \"kubernetes.io/projected/dfc89e12-66af-45e9-8f36-bb46e97f0845-kube-api-access-8mf8b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.468551 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.522873 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:23:37 crc kubenswrapper[4719]: E1124 09:23:37.524429 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.569596 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.569721 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.569815 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mf8b\" (UniqueName: \"kubernetes.io/projected/dfc89e12-66af-45e9-8f36-bb46e97f0845-kube-api-access-8mf8b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.575804 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.577766 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.590619 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mf8b\" (UniqueName: \"kubernetes.io/projected/dfc89e12-66af-45e9-8f36-bb46e97f0845-kube-api-access-8mf8b\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dph8w\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:37 crc kubenswrapper[4719]: I1124 09:23:37.676467 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:38 crc kubenswrapper[4719]: I1124 09:23:38.179579 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w"] Nov 24 09:23:38 crc kubenswrapper[4719]: I1124 09:23:38.215710 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" event={"ID":"dfc89e12-66af-45e9-8f36-bb46e97f0845","Type":"ContainerStarted","Data":"ae9906a348c00f194d3d68498e673aa4b76f4a204aa2becb04d001e5e666eb87"} Nov 24 09:23:40 crc kubenswrapper[4719]: I1124 09:23:40.237050 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" event={"ID":"dfc89e12-66af-45e9-8f36-bb46e97f0845","Type":"ContainerStarted","Data":"d47c12f105bfae403dfd822911469c934da096619c8429cc8a050e8eb3f5ac00"} Nov 24 09:23:46 crc kubenswrapper[4719]: I1124 09:23:46.036952 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" podStartSLOduration=7.885859728 podStartE2EDuration="9.036932374s" podCreationTimestamp="2025-11-24 09:23:37 +0000 UTC" firstStartedPulling="2025-11-24 09:23:38.201098597 +0000 UTC m=+1794.532371869" lastFinishedPulling="2025-11-24 09:23:39.352171263 +0000 UTC m=+1795.683444515" observedRunningTime="2025-11-24 09:23:40.257654615 +0000 UTC m=+1796.588927897" watchObservedRunningTime="2025-11-24 09:23:46.036932374 +0000 UTC m=+1802.368205636" Nov 24 09:23:46 crc kubenswrapper[4719]: I1124 09:23:46.039937 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4vlpd"] Nov 24 09:23:46 crc kubenswrapper[4719]: I1124 09:23:46.045939 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4vlpd"] Nov 24 09:23:46 crc kubenswrapper[4719]: I1124 09:23:46.531833 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81328d33-7af2-4d4b-9f81-033c996a7d36" path="/var/lib/kubelet/pods/81328d33-7af2-4d4b-9f81-033c996a7d36/volumes" Nov 24 09:23:48 crc kubenswrapper[4719]: I1124 09:23:48.045522 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8mh9"] Nov 24 09:23:48 crc kubenswrapper[4719]: I1124 09:23:48.052723 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-x8mh9"] Nov 24 09:23:48 crc kubenswrapper[4719]: I1124 09:23:48.307114 4719 generic.go:334] "Generic (PLEG): container finished" podID="dfc89e12-66af-45e9-8f36-bb46e97f0845" containerID="d47c12f105bfae403dfd822911469c934da096619c8429cc8a050e8eb3f5ac00" exitCode=0 Nov 24 09:23:48 crc kubenswrapper[4719]: I1124 09:23:48.307171 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" event={"ID":"dfc89e12-66af-45e9-8f36-bb46e97f0845","Type":"ContainerDied","Data":"d47c12f105bfae403dfd822911469c934da096619c8429cc8a050e8eb3f5ac00"} Nov 24 09:23:48 crc kubenswrapper[4719]: I1124 09:23:48.536246 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28468fda-a274-493f-8a27-3aa221c5c8db" path="/var/lib/kubelet/pods/28468fda-a274-493f-8a27-3aa221c5c8db/volumes" Nov 24 09:23:49 crc kubenswrapper[4719]: I1124 09:23:49.770854 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:49 crc kubenswrapper[4719]: I1124 09:23:49.900133 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-inventory\") pod \"dfc89e12-66af-45e9-8f36-bb46e97f0845\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " Nov 24 09:23:49 crc kubenswrapper[4719]: I1124 09:23:49.900212 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mf8b\" (UniqueName: \"kubernetes.io/projected/dfc89e12-66af-45e9-8f36-bb46e97f0845-kube-api-access-8mf8b\") pod \"dfc89e12-66af-45e9-8f36-bb46e97f0845\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " Nov 24 09:23:49 crc kubenswrapper[4719]: I1124 09:23:49.900342 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-ssh-key\") pod \"dfc89e12-66af-45e9-8f36-bb46e97f0845\" (UID: \"dfc89e12-66af-45e9-8f36-bb46e97f0845\") " Nov 24 09:23:49 crc kubenswrapper[4719]: I1124 09:23:49.909419 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc89e12-66af-45e9-8f36-bb46e97f0845-kube-api-access-8mf8b" (OuterVolumeSpecName: "kube-api-access-8mf8b") pod "dfc89e12-66af-45e9-8f36-bb46e97f0845" (UID: "dfc89e12-66af-45e9-8f36-bb46e97f0845"). InnerVolumeSpecName "kube-api-access-8mf8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:23:49 crc kubenswrapper[4719]: I1124 09:23:49.926716 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dfc89e12-66af-45e9-8f36-bb46e97f0845" (UID: "dfc89e12-66af-45e9-8f36-bb46e97f0845"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:49 crc kubenswrapper[4719]: I1124 09:23:49.934216 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-inventory" (OuterVolumeSpecName: "inventory") pod "dfc89e12-66af-45e9-8f36-bb46e97f0845" (UID: "dfc89e12-66af-45e9-8f36-bb46e97f0845"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.002610 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.002659 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfc89e12-66af-45e9-8f36-bb46e97f0845-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.002670 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mf8b\" (UniqueName: \"kubernetes.io/projected/dfc89e12-66af-45e9-8f36-bb46e97f0845-kube-api-access-8mf8b\") on node \"crc\" DevicePath \"\"" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.330693 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" event={"ID":"dfc89e12-66af-45e9-8f36-bb46e97f0845","Type":"ContainerDied","Data":"ae9906a348c00f194d3d68498e673aa4b76f4a204aa2becb04d001e5e666eb87"} Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.331110 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae9906a348c00f194d3d68498e673aa4b76f4a204aa2becb04d001e5e666eb87" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.330923 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.427895 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc"] Nov 24 09:23:50 crc kubenswrapper[4719]: E1124 09:23:50.428374 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc89e12-66af-45e9-8f36-bb46e97f0845" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.428410 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc89e12-66af-45e9-8f36-bb46e97f0845" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.428569 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc89e12-66af-45e9-8f36-bb46e97f0845" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.429182 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.431278 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.434184 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.434476 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.434649 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.478079 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc"] Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.513886 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kwrx\" (UniqueName: \"kubernetes.io/projected/51cbaba0-47cd-49ce-9551-8e4e440b7505-kube-api-access-9kwrx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.513986 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.514134 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.616235 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kwrx\" (UniqueName: \"kubernetes.io/projected/51cbaba0-47cd-49ce-9551-8e4e440b7505-kube-api-access-9kwrx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.616403 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.616456 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: 
\"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.620835 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.623599 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.640388 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kwrx\" (UniqueName: \"kubernetes.io/projected/51cbaba0-47cd-49ce-9551-8e4e440b7505-kube-api-access-9kwrx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:50 crc kubenswrapper[4719]: I1124 09:23:50.761233 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:23:51 crc kubenswrapper[4719]: I1124 09:23:51.275639 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc"] Nov 24 09:23:51 crc kubenswrapper[4719]: I1124 09:23:51.338669 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" event={"ID":"51cbaba0-47cd-49ce-9551-8e4e440b7505","Type":"ContainerStarted","Data":"ecbee4b4f126f6d8516aa4c22a9e552d0e37e13d17b054c66375d2bbb5a11543"} Nov 24 09:23:52 crc kubenswrapper[4719]: I1124 09:23:52.352606 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" event={"ID":"51cbaba0-47cd-49ce-9551-8e4e440b7505","Type":"ContainerStarted","Data":"0e159574a8f6d1aa52fbdcd18914875b209f808d7cd57ead49adc810d278920f"} Nov 24 09:23:52 crc kubenswrapper[4719]: I1124 09:23:52.382667 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" podStartSLOduration=1.960094168 podStartE2EDuration="2.382645206s" podCreationTimestamp="2025-11-24 09:23:50 +0000 UTC" firstStartedPulling="2025-11-24 09:23:51.281690612 +0000 UTC m=+1807.612963864" lastFinishedPulling="2025-11-24 09:23:51.70424165 +0000 UTC m=+1808.035514902" observedRunningTime="2025-11-24 09:23:52.37053898 +0000 UTC m=+1808.701812262" watchObservedRunningTime="2025-11-24 09:23:52.382645206 +0000 UTC m=+1808.713918468" Nov 24 09:23:52 crc kubenswrapper[4719]: I1124 09:23:52.521974 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:23:52 crc kubenswrapper[4719]: E1124 09:23:52.522562 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:24:02 crc kubenswrapper[4719]: I1124 09:24:02.561360 4719 generic.go:334] "Generic (PLEG): container finished" podID="51cbaba0-47cd-49ce-9551-8e4e440b7505" containerID="0e159574a8f6d1aa52fbdcd18914875b209f808d7cd57ead49adc810d278920f" exitCode=0 Nov 24 09:24:02 crc kubenswrapper[4719]: I1124 09:24:02.561487 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" event={"ID":"51cbaba0-47cd-49ce-9551-8e4e440b7505","Type":"ContainerDied","Data":"0e159574a8f6d1aa52fbdcd18914875b209f808d7cd57ead49adc810d278920f"} Nov 24 09:24:03 crc kubenswrapper[4719]: I1124 09:24:03.941830 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.115700 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kwrx\" (UniqueName: \"kubernetes.io/projected/51cbaba0-47cd-49ce-9551-8e4e440b7505-kube-api-access-9kwrx\") pod \"51cbaba0-47cd-49ce-9551-8e4e440b7505\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.117136 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-ssh-key\") pod \"51cbaba0-47cd-49ce-9551-8e4e440b7505\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.117598 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-inventory\") pod \"51cbaba0-47cd-49ce-9551-8e4e440b7505\" (UID: \"51cbaba0-47cd-49ce-9551-8e4e440b7505\") " Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.123840 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51cbaba0-47cd-49ce-9551-8e4e440b7505-kube-api-access-9kwrx" (OuterVolumeSpecName: "kube-api-access-9kwrx") pod "51cbaba0-47cd-49ce-9551-8e4e440b7505" (UID: "51cbaba0-47cd-49ce-9551-8e4e440b7505"). InnerVolumeSpecName "kube-api-access-9kwrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.145148 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "51cbaba0-47cd-49ce-9551-8e4e440b7505" (UID: "51cbaba0-47cd-49ce-9551-8e4e440b7505"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.146419 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-inventory" (OuterVolumeSpecName: "inventory") pod "51cbaba0-47cd-49ce-9551-8e4e440b7505" (UID: "51cbaba0-47cd-49ce-9551-8e4e440b7505"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.220402 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.220437 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kwrx\" (UniqueName: \"kubernetes.io/projected/51cbaba0-47cd-49ce-9551-8e4e440b7505-kube-api-access-9kwrx\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.220449 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/51cbaba0-47cd-49ce-9551-8e4e440b7505-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.578496 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" event={"ID":"51cbaba0-47cd-49ce-9551-8e4e440b7505","Type":"ContainerDied","Data":"ecbee4b4f126f6d8516aa4c22a9e552d0e37e13d17b054c66375d2bbb5a11543"} Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.578708 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecbee4b4f126f6d8516aa4c22a9e552d0e37e13d17b054c66375d2bbb5a11543" Nov 24 09:24:04 crc kubenswrapper[4719]: I1124 09:24:04.578552 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc" Nov 24 09:24:04 crc kubenswrapper[4719]: E1124 09:24:04.699178 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51cbaba0_47cd_49ce_9551_8e4e440b7505.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51cbaba0_47cd_49ce_9551_8e4e440b7505.slice/crio-ecbee4b4f126f6d8516aa4c22a9e552d0e37e13d17b054c66375d2bbb5a11543\": RecentStats: unable to find data in memory cache]" Nov 24 09:24:06 crc kubenswrapper[4719]: I1124 09:24:06.521130 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:24:06 crc kubenswrapper[4719]: E1124 09:24:06.522094 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:24:10 crc kubenswrapper[4719]: I1124 09:24:10.271289 4719 scope.go:117] "RemoveContainer" containerID="c22ed4ad0ba88baa4c6dd82e8fb8c82fda65ce23848f984ff2c624d6ec0cf5d5" Nov 24 09:24:10 crc kubenswrapper[4719]: I1124 09:24:10.502572 4719 scope.go:117] "RemoveContainer" containerID="eb3c20c894ab71f62034c6abc2ff661dfc401547e52546f4f66de536b992f090" Nov 24 09:24:10 crc kubenswrapper[4719]: I1124 09:24:10.549483 4719 scope.go:117] "RemoveContainer" containerID="aa985b55b297acae8a118bf6107d9e386b0a250b74e57b331d34f6d884080499" Nov 24 09:24:21 crc kubenswrapper[4719]: I1124 09:24:21.521737 4719 scope.go:117] "RemoveContainer" 
containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:24:21 crc kubenswrapper[4719]: E1124 09:24:21.522369 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:24:29 crc kubenswrapper[4719]: I1124 09:24:29.046339 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hsp2d"] Nov 24 09:24:29 crc kubenswrapper[4719]: I1124 09:24:29.058553 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hsp2d"] Nov 24 09:24:30 crc kubenswrapper[4719]: I1124 09:24:30.529914 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e" path="/var/lib/kubelet/pods/996be96f-8fa7-4c70-9f2c-ed1b87b4ad4e/volumes" Nov 24 09:24:36 crc kubenswrapper[4719]: I1124 09:24:36.525193 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:24:36 crc kubenswrapper[4719]: E1124 09:24:36.526611 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:24:51 crc kubenswrapper[4719]: I1124 09:24:51.521870 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:24:51 crc kubenswrapper[4719]: E1124 09:24:51.522743 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:25:06 crc kubenswrapper[4719]: I1124 09:25:06.521243 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:25:06 crc kubenswrapper[4719]: E1124 09:25:06.521864 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:25:10 crc kubenswrapper[4719]: I1124 09:25:10.696082 4719 scope.go:117] "RemoveContainer" containerID="cbb213cd89c4180e8c8588226c99002e690f2edf775ee64ddc4e71361d03a5b8" Nov 24 09:25:20 crc kubenswrapper[4719]: I1124 09:25:20.521118 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:25:20 crc 
kubenswrapper[4719]: E1124 09:25:20.521920 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:25:31 crc kubenswrapper[4719]: I1124 09:25:31.521121 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:25:31 crc kubenswrapper[4719]: E1124 09:25:31.521858 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:25:46 crc kubenswrapper[4719]: I1124 09:25:46.521643 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:25:47 crc kubenswrapper[4719]: I1124 09:25:47.521412 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"243cff5b77320b462dfde0084994a0c4bd7eb54c42623909c12e57e5ffc63d4d"} Nov 24 09:27:04 crc kubenswrapper[4719]: I1124 09:27:04.986765 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-69mrh"] Nov 24 09:27:04 crc kubenswrapper[4719]: E1124 09:27:04.989345 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cbaba0-47cd-49ce-9551-8e4e440b7505" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:27:04 crc kubenswrapper[4719]: I1124 09:27:04.989365 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cbaba0-47cd-49ce-9551-8e4e440b7505" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:27:04 crc kubenswrapper[4719]: I1124 09:27:04.989562 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cbaba0-47cd-49ce-9551-8e4e440b7505" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:27:04 crc kubenswrapper[4719]: I1124 09:27:04.991200 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.004785 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-69mrh"] Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.143564 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-utilities\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.143968 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-catalog-content\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.144235 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbs5m\" (UniqueName: \"kubernetes.io/projected/cd607312-6e69-4e00-a333-4abe8c7c937a-kube-api-access-qbs5m\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.246548 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-utilities\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.246650 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-catalog-content\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.246748 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbs5m\" (UniqueName: \"kubernetes.io/projected/cd607312-6e69-4e00-a333-4abe8c7c937a-kube-api-access-qbs5m\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.247556 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-utilities\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.247603 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-catalog-content\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.275352 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qbs5m\" (UniqueName: \"kubernetes.io/projected/cd607312-6e69-4e00-a333-4abe8c7c937a-kube-api-access-qbs5m\") pod \"redhat-operators-69mrh\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.319091 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:05 crc kubenswrapper[4719]: I1124 09:27:05.853343 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-69mrh"] Nov 24 09:27:06 crc kubenswrapper[4719]: I1124 09:27:06.174845 4719 generic.go:334] "Generic (PLEG): container finished" podID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerID="666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878" exitCode=0 Nov 24 09:27:06 crc kubenswrapper[4719]: I1124 09:27:06.174899 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69mrh" event={"ID":"cd607312-6e69-4e00-a333-4abe8c7c937a","Type":"ContainerDied","Data":"666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878"} Nov 24 09:27:06 crc kubenswrapper[4719]: I1124 09:27:06.174934 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69mrh" event={"ID":"cd607312-6e69-4e00-a333-4abe8c7c937a","Type":"ContainerStarted","Data":"7abd0afeb8c361851cfd691da334281ef9ceda334c505ef4014b4432e8bf9d97"} Nov 24 09:27:06 crc kubenswrapper[4719]: I1124 09:27:06.176426 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:27:09 crc kubenswrapper[4719]: I1124 09:27:09.198921 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69mrh" event={"ID":"cd607312-6e69-4e00-a333-4abe8c7c937a","Type":"ContainerStarted","Data":"8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082"} Nov 24 09:27:13 crc kubenswrapper[4719]: I1124 09:27:13.236182 4719 generic.go:334] "Generic (PLEG): container finished" podID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerID="8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082" exitCode=0 Nov 24 09:27:13 crc kubenswrapper[4719]: I1124 09:27:13.236241 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69mrh" event={"ID":"cd607312-6e69-4e00-a333-4abe8c7c937a","Type":"ContainerDied","Data":"8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082"} Nov 24 09:27:14 crc kubenswrapper[4719]: I1124 09:27:14.247358 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69mrh" event={"ID":"cd607312-6e69-4e00-a333-4abe8c7c937a","Type":"ContainerStarted","Data":"dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e"} Nov 24 09:27:15 crc kubenswrapper[4719]: I1124 09:27:15.319474 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:15 crc kubenswrapper[4719]: I1124 09:27:15.319818 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:16 crc kubenswrapper[4719]: I1124 09:27:16.371403 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-69mrh" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="registry-server" probeResult="failure" output=< Nov 24 
09:27:16 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:27:16 crc kubenswrapper[4719]: > Nov 24 09:27:25 crc kubenswrapper[4719]: I1124 09:27:25.368151 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:25 crc kubenswrapper[4719]: I1124 09:27:25.397000 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-69mrh" podStartSLOduration=13.906708493 podStartE2EDuration="21.396983107s" podCreationTimestamp="2025-11-24 09:27:04 +0000 UTC" firstStartedPulling="2025-11-24 09:27:06.176207753 +0000 UTC m=+2002.507481005" lastFinishedPulling="2025-11-24 09:27:13.666482367 +0000 UTC m=+2009.997755619" observedRunningTime="2025-11-24 09:27:14.274502593 +0000 UTC m=+2010.605775865" watchObservedRunningTime="2025-11-24 09:27:25.396983107 +0000 UTC m=+2021.728256359" Nov 24 09:27:25 crc kubenswrapper[4719]: I1124 09:27:25.422254 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:25 crc kubenswrapper[4719]: I1124 09:27:25.614083 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-69mrh"] Nov 24 09:27:27 crc kubenswrapper[4719]: I1124 09:27:27.350233 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-69mrh" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="registry-server" containerID="cri-o://dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e" gracePeriod=2 Nov 24 09:27:27 crc kubenswrapper[4719]: I1124 09:27:27.877295 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:27 crc kubenswrapper[4719]: I1124 09:27:27.913701 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbs5m\" (UniqueName: \"kubernetes.io/projected/cd607312-6e69-4e00-a333-4abe8c7c937a-kube-api-access-qbs5m\") pod \"cd607312-6e69-4e00-a333-4abe8c7c937a\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " Nov 24 09:27:27 crc kubenswrapper[4719]: I1124 09:27:27.913949 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-utilities\") pod \"cd607312-6e69-4e00-a333-4abe8c7c937a\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " Nov 24 09:27:27 crc kubenswrapper[4719]: I1124 09:27:27.913996 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-catalog-content\") pod \"cd607312-6e69-4e00-a333-4abe8c7c937a\" (UID: \"cd607312-6e69-4e00-a333-4abe8c7c937a\") " Nov 24 09:27:27 crc kubenswrapper[4719]: I1124 09:27:27.914716 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-utilities" (OuterVolumeSpecName: "utilities") pod "cd607312-6e69-4e00-a333-4abe8c7c937a" (UID: "cd607312-6e69-4e00-a333-4abe8c7c937a"). InnerVolumeSpecName "utilities". 
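The startup-probe failure recorded above, timeout: failed to connect service ":50051" within 1s, is the output format of grpc_health_probe run as an exec probe against registry-server's gRPC port; the container simply was not listening yet, and the same probe flips to status="started" ten seconds later. A minimal Go sketch of such a probe follows. Only the port and the 1s timeout come from the log; PeriodSeconds and FailureThreshold are assumptions, and the real catalog pod spec is generated by OLM and may differ.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Exec-style startup probe of the kind that produces the failure above.
	// -addr=:50051 and TimeoutSeconds: 1 match the logged probe output;
	// PeriodSeconds and FailureThreshold are assumed values.
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"grpc_health_probe", "-addr=:50051"},
			},
		},
		TimeoutSeconds:   1,
		PeriodSeconds:    10,
		FailureThreshold: 15,
	}
	fmt.Println(probe.Exec.Command, probe.TimeoutSeconds)
}

Once the probe succeeds, kubelet reports probe="startup" status="started" and then probe="readiness" status="ready", the exact sequence logged for redhat-operators-69mrh at 09:27:25.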
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:27:27 crc kubenswrapper[4719]: I1124 09:27:27.931289 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd607312-6e69-4e00-a333-4abe8c7c937a-kube-api-access-qbs5m" (OuterVolumeSpecName: "kube-api-access-qbs5m") pod "cd607312-6e69-4e00-a333-4abe8c7c937a" (UID: "cd607312-6e69-4e00-a333-4abe8c7c937a"). InnerVolumeSpecName "kube-api-access-qbs5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.019367 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbs5m\" (UniqueName: \"kubernetes.io/projected/cd607312-6e69-4e00-a333-4abe8c7c937a-kube-api-access-qbs5m\") on node \"crc\" DevicePath \"\"" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.019411 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.157802 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd607312-6e69-4e00-a333-4abe8c7c937a" (UID: "cd607312-6e69-4e00-a333-4abe8c7c937a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.225724 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd607312-6e69-4e00-a333-4abe8c7c937a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.361772 4719 generic.go:334] "Generic (PLEG): container finished" podID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerID="dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e" exitCode=0 Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.361850 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69mrh" event={"ID":"cd607312-6e69-4e00-a333-4abe8c7c937a","Type":"ContainerDied","Data":"dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e"} Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.361917 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-69mrh" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.361956 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69mrh" event={"ID":"cd607312-6e69-4e00-a333-4abe8c7c937a","Type":"ContainerDied","Data":"7abd0afeb8c361851cfd691da334281ef9ceda334c505ef4014b4432e8bf9d97"} Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.361990 4719 scope.go:117] "RemoveContainer" containerID="dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.381748 4719 scope.go:117] "RemoveContainer" containerID="8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.401588 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-69mrh"] Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.408715 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-69mrh"] Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.419111 4719 scope.go:117] "RemoveContainer" containerID="666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.454907 4719 scope.go:117] "RemoveContainer" containerID="dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e" Nov 24 09:27:28 crc kubenswrapper[4719]: E1124 09:27:28.455432 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e\": container with ID starting with dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e not found: ID does not exist" containerID="dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.455467 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e"} err="failed to get container status \"dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e\": rpc error: code = NotFound desc = could not find container \"dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e\": container with ID starting with dd3b3297f739ae9351ae69d6d2dd449209474d2ff8d18ce0c200c5c6808db24e not found: ID does not exist" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.455494 4719 scope.go:117] "RemoveContainer" containerID="8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082" Nov 24 09:27:28 crc kubenswrapper[4719]: E1124 09:27:28.455897 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082\": container with ID starting with 8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082 not found: ID does not exist" containerID="8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.455925 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082"} err="failed to get container status \"8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082\": rpc error: code = NotFound desc = could not find container 
\"8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082\": container with ID starting with 8ef811c4d0e09968185df214084de7d406316f28f48b755128c9a272740dd082 not found: ID does not exist" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.455939 4719 scope.go:117] "RemoveContainer" containerID="666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878" Nov 24 09:27:28 crc kubenswrapper[4719]: E1124 09:27:28.456315 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878\": container with ID starting with 666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878 not found: ID does not exist" containerID="666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.456341 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878"} err="failed to get container status \"666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878\": rpc error: code = NotFound desc = could not find container \"666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878\": container with ID starting with 666d44b5bfa51ef03b1155cbcc05883fb7f1d20b7d014bc28d0517fc873e2878 not found: ID does not exist" Nov 24 09:27:28 crc kubenswrapper[4719]: I1124 09:27:28.533182 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" path="/var/lib/kubelet/pods/cd607312-6e69-4e00-a333-4abe8c7c937a/volumes" Nov 24 09:28:04 crc kubenswrapper[4719]: I1124 09:28:04.562507 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:28:04 crc kubenswrapper[4719]: I1124 09:28:04.564064 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:28:34 crc kubenswrapper[4719]: I1124 09:28:34.562061 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:28:34 crc kubenswrapper[4719]: I1124 09:28:34.562567 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:29:04 crc kubenswrapper[4719]: I1124 09:29:04.562066 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:29:04 crc 
kubenswrapper[4719]: I1124 09:29:04.562625 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:29:04 crc kubenswrapper[4719]: I1124 09:29:04.563015 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:29:04 crc kubenswrapper[4719]: I1124 09:29:04.563804 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"243cff5b77320b462dfde0084994a0c4bd7eb54c42623909c12e57e5ffc63d4d"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:29:04 crc kubenswrapper[4719]: I1124 09:29:04.563862 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://243cff5b77320b462dfde0084994a0c4bd7eb54c42623909c12e57e5ffc63d4d" gracePeriod=600 Nov 24 09:29:05 crc kubenswrapper[4719]: I1124 09:29:05.188471 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="243cff5b77320b462dfde0084994a0c4bd7eb54c42623909c12e57e5ffc63d4d" exitCode=0 Nov 24 09:29:05 crc kubenswrapper[4719]: I1124 09:29:05.188506 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"243cff5b77320b462dfde0084994a0c4bd7eb54c42623909c12e57e5ffc63d4d"} Nov 24 09:29:05 crc kubenswrapper[4719]: I1124 09:29:05.188804 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1"} Nov 24 09:29:05 crc kubenswrapper[4719]: I1124 09:29:05.188827 4719 scope.go:117] "RemoveContainer" containerID="5d388949a1ffd37364e03d791b16b6a3002ba4f8004dccc17595d5c144ee869b" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.747258 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-722nh"] Nov 24 09:29:16 crc kubenswrapper[4719]: E1124 09:29:16.748311 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="registry-server" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.748332 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="registry-server" Nov 24 09:29:16 crc kubenswrapper[4719]: E1124 09:29:16.748379 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="extract-utilities" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.748388 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="extract-utilities" Nov 24 09:29:16 crc kubenswrapper[4719]: E1124 09:29:16.748402 4719 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="extract-content" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.748409 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="extract-content" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.748671 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd607312-6e69-4e00-a333-4abe8c7c937a" containerName="registry-server" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.750244 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.756388 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-722nh"] Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.808390 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfhv\" (UniqueName: \"kubernetes.io/projected/82fa294c-3eb4-4678-803e-067d607aa237-kube-api-access-5qfhv\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.808468 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-catalog-content\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.808659 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-utilities\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.910258 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-utilities\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.910309 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfhv\" (UniqueName: \"kubernetes.io/projected/82fa294c-3eb4-4678-803e-067d607aa237-kube-api-access-5qfhv\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.910335 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-catalog-content\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.910702 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-utilities\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.910968 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-catalog-content\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:16 crc kubenswrapper[4719]: I1124 09:29:16.931252 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qfhv\" (UniqueName: \"kubernetes.io/projected/82fa294c-3eb4-4678-803e-067d607aa237-kube-api-access-5qfhv\") pod \"certified-operators-722nh\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:17 crc kubenswrapper[4719]: I1124 09:29:17.070520 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:17 crc kubenswrapper[4719]: I1124 09:29:17.429934 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-722nh"] Nov 24 09:29:18 crc kubenswrapper[4719]: I1124 09:29:18.299560 4719 generic.go:334] "Generic (PLEG): container finished" podID="82fa294c-3eb4-4678-803e-067d607aa237" containerID="948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388" exitCode=0 Nov 24 09:29:18 crc kubenswrapper[4719]: I1124 09:29:18.299617 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-722nh" event={"ID":"82fa294c-3eb4-4678-803e-067d607aa237","Type":"ContainerDied","Data":"948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388"} Nov 24 09:29:18 crc kubenswrapper[4719]: I1124 09:29:18.299649 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-722nh" event={"ID":"82fa294c-3eb4-4678-803e-067d607aa237","Type":"ContainerStarted","Data":"87cc01a9d000e0d35c3ae622413b66f615ddd80f0461fef905227dff29b233b9"} Nov 24 09:29:19 crc kubenswrapper[4719]: I1124 09:29:19.313214 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-722nh" event={"ID":"82fa294c-3eb4-4678-803e-067d607aa237","Type":"ContainerStarted","Data":"1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33"} Nov 24 09:29:20 crc kubenswrapper[4719]: I1124 09:29:20.324606 4719 generic.go:334] "Generic (PLEG): container finished" podID="82fa294c-3eb4-4678-803e-067d607aa237" containerID="1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33" exitCode=0 Nov 24 09:29:20 crc kubenswrapper[4719]: I1124 09:29:20.324655 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-722nh" event={"ID":"82fa294c-3eb4-4678-803e-067d607aa237","Type":"ContainerDied","Data":"1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33"} Nov 24 09:29:21 crc kubenswrapper[4719]: I1124 09:29:21.336156 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-722nh" event={"ID":"82fa294c-3eb4-4678-803e-067d607aa237","Type":"ContainerStarted","Data":"4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e"} Nov 24 09:29:21 crc kubenswrapper[4719]: I1124 
09:29:21.362764 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-722nh" podStartSLOduration=2.839011722 podStartE2EDuration="5.362745319s" podCreationTimestamp="2025-11-24 09:29:16 +0000 UTC" firstStartedPulling="2025-11-24 09:29:18.301595542 +0000 UTC m=+2134.632868794" lastFinishedPulling="2025-11-24 09:29:20.825329139 +0000 UTC m=+2137.156602391" observedRunningTime="2025-11-24 09:29:21.354439081 +0000 UTC m=+2137.685712343" watchObservedRunningTime="2025-11-24 09:29:21.362745319 +0000 UTC m=+2137.694018571" Nov 24 09:29:27 crc kubenswrapper[4719]: I1124 09:29:27.073253 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:27 crc kubenswrapper[4719]: I1124 09:29:27.074380 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:27 crc kubenswrapper[4719]: I1124 09:29:27.126438 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:27 crc kubenswrapper[4719]: I1124 09:29:27.453199 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:27 crc kubenswrapper[4719]: I1124 09:29:27.505017 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-722nh"] Nov 24 09:29:29 crc kubenswrapper[4719]: I1124 09:29:29.408638 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-722nh" podUID="82fa294c-3eb4-4678-803e-067d607aa237" containerName="registry-server" containerID="cri-o://4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e" gracePeriod=2 Nov 24 09:29:29 crc kubenswrapper[4719]: I1124 09:29:29.903709 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.012414 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfhv\" (UniqueName: \"kubernetes.io/projected/82fa294c-3eb4-4678-803e-067d607aa237-kube-api-access-5qfhv\") pod \"82fa294c-3eb4-4678-803e-067d607aa237\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.012665 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-catalog-content\") pod \"82fa294c-3eb4-4678-803e-067d607aa237\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.012726 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-utilities\") pod \"82fa294c-3eb4-4678-803e-067d607aa237\" (UID: \"82fa294c-3eb4-4678-803e-067d607aa237\") " Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.014064 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-utilities" (OuterVolumeSpecName: "utilities") pod "82fa294c-3eb4-4678-803e-067d607aa237" (UID: "82fa294c-3eb4-4678-803e-067d607aa237"). InnerVolumeSpecName "utilities". 
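The pod_startup_latency_tracker record above for certified-operators-722nh is internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that E2E value minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so pull time is excluded from the SLO figure. A short Go check using the timestamps exactly as logged:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-24 09:29:16 +0000 UTC")
	firstPull := parse("2025-11-24 09:29:18.301595542 +0000 UTC")
	lastPull := parse("2025-11-24 09:29:20.825329139 +0000 UTC")
	observed := parse("2025-11-24 09:29:21.362745319 +0000 UTC")

	e2e := observed.Sub(created)    // the logged podStartE2EDuration
	pull := lastPull.Sub(firstPull) // time spent pulling the image
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull is the logged podStartSLOduration
}

It prints 5.362745319s 2.523733597s 2.839011722s, matching the logged fields digit for digit. The earlier record for redhat-operators-69mrh satisfies the same identity: 21.396983107s minus 7.490274614s of pulling gives the logged 13.906708493.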
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.026342 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fa294c-3eb4-4678-803e-067d607aa237-kube-api-access-5qfhv" (OuterVolumeSpecName: "kube-api-access-5qfhv") pod "82fa294c-3eb4-4678-803e-067d607aa237" (UID: "82fa294c-3eb4-4678-803e-067d607aa237"). InnerVolumeSpecName "kube-api-access-5qfhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.073921 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82fa294c-3eb4-4678-803e-067d607aa237" (UID: "82fa294c-3eb4-4678-803e-067d607aa237"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.115134 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.115173 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82fa294c-3eb4-4678-803e-067d607aa237-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.115187 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfhv\" (UniqueName: \"kubernetes.io/projected/82fa294c-3eb4-4678-803e-067d607aa237-kube-api-access-5qfhv\") on node \"crc\" DevicePath \"\"" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.422222 4719 generic.go:334] "Generic (PLEG): container finished" podID="82fa294c-3eb4-4678-803e-067d607aa237" containerID="4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e" exitCode=0 Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.422264 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-722nh" event={"ID":"82fa294c-3eb4-4678-803e-067d607aa237","Type":"ContainerDied","Data":"4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e"} Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.422293 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-722nh" event={"ID":"82fa294c-3eb4-4678-803e-067d607aa237","Type":"ContainerDied","Data":"87cc01a9d000e0d35c3ae622413b66f615ddd80f0461fef905227dff29b233b9"} Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.422312 4719 scope.go:117] "RemoveContainer" containerID="4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.422470 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-722nh" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.441909 4719 scope.go:117] "RemoveContainer" containerID="1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.468146 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-722nh"] Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.478377 4719 scope.go:117] "RemoveContainer" containerID="948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.483616 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-722nh"] Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.518803 4719 scope.go:117] "RemoveContainer" containerID="4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e" Nov 24 09:29:30 crc kubenswrapper[4719]: E1124 09:29:30.519335 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e\": container with ID starting with 4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e not found: ID does not exist" containerID="4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.519372 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e"} err="failed to get container status \"4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e\": rpc error: code = NotFound desc = could not find container \"4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e\": container with ID starting with 4403cd27e1d94ab5d8b44b1e7b038de8d04a332bf81bad36b07cf390f3c3ba9e not found: ID does not exist" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.519398 4719 scope.go:117] "RemoveContainer" containerID="1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33" Nov 24 09:29:30 crc kubenswrapper[4719]: E1124 09:29:30.519746 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33\": container with ID starting with 1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33 not found: ID does not exist" containerID="1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.519776 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33"} err="failed to get container status \"1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33\": rpc error: code = NotFound desc = could not find container \"1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33\": container with ID starting with 1d719253e5b58826d89cb9e3ee4249430cec1ee33807a1e22a147afbb334cb33 not found: ID does not exist" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.519795 4719 scope.go:117] "RemoveContainer" containerID="948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388" Nov 24 09:29:30 crc kubenswrapper[4719]: E1124 09:29:30.520094 4719 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388\": container with ID starting with 948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388 not found: ID does not exist" containerID="948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.520134 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388"} err="failed to get container status \"948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388\": rpc error: code = NotFound desc = could not find container \"948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388\": container with ID starting with 948d4c722f9207365a5998524657e199c73f2a849e38d41e90b10767bcc3f388 not found: ID does not exist" Nov 24 09:29:30 crc kubenswrapper[4719]: I1124 09:29:30.536974 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82fa294c-3eb4-4678-803e-067d607aa237" path="/var/lib/kubelet/pods/82fa294c-3eb4-4678-803e-067d607aa237/volumes" Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.477124 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.500834 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lmt95"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.505682 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.516272 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.526612 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.537466 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.546580 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.560095 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-zxrnc"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.569114 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-lmt95"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.576864 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.586116 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.594111 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-2vggt"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.606119 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v7tw"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.613339 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bv8nm"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.623119 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-k2jfl"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.630213 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8nvkg"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.641098 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.650313 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mjdm7"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.665086 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dph8w"] Nov 24 09:29:33 crc kubenswrapper[4719]: I1124 09:29:33.671172 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ft68h"] Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.539774 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb" path="/var/lib/kubelet/pods/03c7f5a9-ff2e-4a88-a68e-dd8d7879ceeb/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.541837 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8d82a7-24db-4723-a4bb-33af2d084882" path="/var/lib/kubelet/pods/0f8d82a7-24db-4723-a4bb-33af2d084882/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.542595 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51cbaba0-47cd-49ce-9551-8e4e440b7505" path="/var/lib/kubelet/pods/51cbaba0-47cd-49ce-9551-8e4e440b7505/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.543469 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c842f7-bace-4901-a2a7-b2e3ca12ff5e" path="/var/lib/kubelet/pods/67c842f7-bace-4901-a2a7-b2e3ca12ff5e/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.545078 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87710985-771e-4a43-a5d1-4933e8fc0ecf" path="/var/lib/kubelet/pods/87710985-771e-4a43-a5d1-4933e8fc0ecf/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.545904 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce53f85-5ce6-4f87-9212-49c23937f92c" path="/var/lib/kubelet/pods/9ce53f85-5ce6-4f87-9212-49c23937f92c/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.546791 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e61d99-60c6-4031-b2ec-69289a6e5d52" path="/var/lib/kubelet/pods/d4e61d99-60c6-4031-b2ec-69289a6e5d52/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.548374 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc89e12-66af-45e9-8f36-bb46e97f0845" path="/var/lib/kubelet/pods/dfc89e12-66af-45e9-8f36-bb46e97f0845/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.549745 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e0638669-2686-4194-b1e6-794b7eabacf6" path="/var/lib/kubelet/pods/e0638669-2686-4194-b1e6-794b7eabacf6/volumes" Nov 24 09:29:34 crc kubenswrapper[4719]: I1124 09:29:34.551480 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7330eb5-2c71-4ee9-b835-72cc930cecdd" path="/var/lib/kubelet/pods/e7330eb5-2c71-4ee9-b835-72cc930cecdd/volumes" Nov 24 09:29:35 crc kubenswrapper[4719]: I1124 09:29:35.969735 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8wlf9"] Nov 24 09:29:35 crc kubenswrapper[4719]: E1124 09:29:35.970809 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fa294c-3eb4-4678-803e-067d607aa237" containerName="registry-server" Nov 24 09:29:35 crc kubenswrapper[4719]: I1124 09:29:35.970936 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fa294c-3eb4-4678-803e-067d607aa237" containerName="registry-server" Nov 24 09:29:35 crc kubenswrapper[4719]: E1124 09:29:35.971026 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fa294c-3eb4-4678-803e-067d607aa237" containerName="extract-content" Nov 24 09:29:35 crc kubenswrapper[4719]: I1124 09:29:35.971117 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fa294c-3eb4-4678-803e-067d607aa237" containerName="extract-content" Nov 24 09:29:35 crc kubenswrapper[4719]: E1124 09:29:35.971219 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fa294c-3eb4-4678-803e-067d607aa237" containerName="extract-utilities" Nov 24 09:29:35 crc kubenswrapper[4719]: I1124 09:29:35.971285 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fa294c-3eb4-4678-803e-067d607aa237" containerName="extract-utilities" Nov 24 09:29:35 crc kubenswrapper[4719]: I1124 09:29:35.971556 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fa294c-3eb4-4678-803e-067d607aa237" containerName="registry-server" Nov 24 09:29:35 crc kubenswrapper[4719]: I1124 09:29:35.973246 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:35 crc kubenswrapper[4719]: I1124 09:29:35.983019 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8wlf9"] Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.038820 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-catalog-content\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.039124 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-utilities\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.039371 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdcv\" (UniqueName: \"kubernetes.io/projected/f9150973-c4d7-42da-be80-9020ced41b77-kube-api-access-krdcv\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.141154 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdcv\" (UniqueName: \"kubernetes.io/projected/f9150973-c4d7-42da-be80-9020ced41b77-kube-api-access-krdcv\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.141263 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-catalog-content\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.141336 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-utilities\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.141798 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-utilities\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.142009 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-catalog-content\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.164818 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-krdcv\" (UniqueName: \"kubernetes.io/projected/f9150973-c4d7-42da-be80-9020ced41b77-kube-api-access-krdcv\") pod \"community-operators-8wlf9\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.297324 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:36 crc kubenswrapper[4719]: I1124 09:29:36.880715 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8wlf9"] Nov 24 09:29:37 crc kubenswrapper[4719]: I1124 09:29:37.491850 4719 generic.go:334] "Generic (PLEG): container finished" podID="f9150973-c4d7-42da-be80-9020ced41b77" containerID="b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a" exitCode=0 Nov 24 09:29:37 crc kubenswrapper[4719]: I1124 09:29:37.491896 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wlf9" event={"ID":"f9150973-c4d7-42da-be80-9020ced41b77","Type":"ContainerDied","Data":"b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a"} Nov 24 09:29:37 crc kubenswrapper[4719]: I1124 09:29:37.492389 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wlf9" event={"ID":"f9150973-c4d7-42da-be80-9020ced41b77","Type":"ContainerStarted","Data":"be489989032f3d8f4514d4d484fe6f0d49ed174ef78d35f9696de7daac8d6342"} Nov 24 09:29:38 crc kubenswrapper[4719]: I1124 09:29:38.500900 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wlf9" event={"ID":"f9150973-c4d7-42da-be80-9020ced41b77","Type":"ContainerStarted","Data":"b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d"} Nov 24 09:29:40 crc kubenswrapper[4719]: I1124 09:29:40.518373 4719 generic.go:334] "Generic (PLEG): container finished" podID="f9150973-c4d7-42da-be80-9020ced41b77" containerID="b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d" exitCode=0 Nov 24 09:29:40 crc kubenswrapper[4719]: I1124 09:29:40.518466 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wlf9" event={"ID":"f9150973-c4d7-42da-be80-9020ced41b77","Type":"ContainerDied","Data":"b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d"} Nov 24 09:29:41 crc kubenswrapper[4719]: I1124 09:29:41.531156 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wlf9" event={"ID":"f9150973-c4d7-42da-be80-9020ced41b77","Type":"ContainerStarted","Data":"7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c"} Nov 24 09:29:41 crc kubenswrapper[4719]: I1124 09:29:41.564965 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8wlf9" podStartSLOduration=3.105774035 podStartE2EDuration="6.56494598s" podCreationTimestamp="2025-11-24 09:29:35 +0000 UTC" firstStartedPulling="2025-11-24 09:29:37.493598989 +0000 UTC m=+2153.824872251" lastFinishedPulling="2025-11-24 09:29:40.952770944 +0000 UTC m=+2157.284044196" observedRunningTime="2025-11-24 09:29:41.557900699 +0000 UTC m=+2157.889173951" watchObservedRunningTime="2025-11-24 09:29:41.56494598 +0000 UTC m=+2157.896219232" Nov 24 09:29:46 crc kubenswrapper[4719]: I1124 09:29:46.298564 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:46 crc kubenswrapper[4719]: I1124 09:29:46.300681 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:46 crc kubenswrapper[4719]: I1124 09:29:46.348982 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:46 crc kubenswrapper[4719]: I1124 09:29:46.641586 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:46 crc kubenswrapper[4719]: I1124 09:29:46.686908 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8wlf9"] Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.344449 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82"] Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.346158 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.348916 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.349797 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.349805 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.349954 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.363741 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82"] Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.379760 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.462469 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.462566 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.462596 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.462623 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh8zg\" (UniqueName: \"kubernetes.io/projected/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-kube-api-access-bh8zg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.462745 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.564546 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.564611 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.564629 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.564648 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh8zg\" (UniqueName: \"kubernetes.io/projected/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-kube-api-access-bh8zg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.564708 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.572232 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.576946 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.579593 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.579641 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.583349 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh8zg\" (UniqueName: \"kubernetes.io/projected/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-kube-api-access-bh8zg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:47 crc kubenswrapper[4719]: I1124 09:29:47.665977 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:29:48 crc kubenswrapper[4719]: I1124 09:29:48.200677 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82"] Nov 24 09:29:48 crc kubenswrapper[4719]: I1124 09:29:48.607798 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" event={"ID":"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f","Type":"ContainerStarted","Data":"0791abe3d37ba5b9e9e60a9969dcee9eb00d68b6d47aa275c83422a7201b87b8"} Nov 24 09:29:48 crc kubenswrapper[4719]: I1124 09:29:48.607975 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8wlf9" podUID="f9150973-c4d7-42da-be80-9020ced41b77" containerName="registry-server" containerID="cri-o://7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c" gracePeriod=2 Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.026319 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.100005 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-catalog-content\") pod \"f9150973-c4d7-42da-be80-9020ced41b77\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.100079 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krdcv\" (UniqueName: \"kubernetes.io/projected/f9150973-c4d7-42da-be80-9020ced41b77-kube-api-access-krdcv\") pod \"f9150973-c4d7-42da-be80-9020ced41b77\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.100267 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-utilities\") pod \"f9150973-c4d7-42da-be80-9020ced41b77\" (UID: \"f9150973-c4d7-42da-be80-9020ced41b77\") " Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.103476 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-utilities" (OuterVolumeSpecName: "utilities") pod "f9150973-c4d7-42da-be80-9020ced41b77" (UID: "f9150973-c4d7-42da-be80-9020ced41b77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.107161 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9150973-c4d7-42da-be80-9020ced41b77-kube-api-access-krdcv" (OuterVolumeSpecName: "kube-api-access-krdcv") pod "f9150973-c4d7-42da-be80-9020ced41b77" (UID: "f9150973-c4d7-42da-be80-9020ced41b77"). InnerVolumeSpecName "kube-api-access-krdcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.202364 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krdcv\" (UniqueName: \"kubernetes.io/projected/f9150973-c4d7-42da-be80-9020ced41b77-kube-api-access-krdcv\") on node \"crc\" DevicePath \"\"" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.202414 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.622126 4719 generic.go:334] "Generic (PLEG): container finished" podID="f9150973-c4d7-42da-be80-9020ced41b77" containerID="7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c" exitCode=0 Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.622203 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wlf9" event={"ID":"f9150973-c4d7-42da-be80-9020ced41b77","Type":"ContainerDied","Data":"7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c"} Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.622262 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8wlf9" event={"ID":"f9150973-c4d7-42da-be80-9020ced41b77","Type":"ContainerDied","Data":"be489989032f3d8f4514d4d484fe6f0d49ed174ef78d35f9696de7daac8d6342"} Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.622284 4719 scope.go:117] "RemoveContainer" containerID="7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.622458 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8wlf9" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.625403 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9150973-c4d7-42da-be80-9020ced41b77" (UID: "f9150973-c4d7-42da-be80-9020ced41b77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.629736 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" event={"ID":"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f","Type":"ContainerStarted","Data":"ded8ff6fe28ce79cd4736bab8d8e4488084b8f4c430690581648e3635f8935ef"} Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.655923 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" podStartSLOduration=2.266311457 podStartE2EDuration="2.655898061s" podCreationTimestamp="2025-11-24 09:29:47 +0000 UTC" firstStartedPulling="2025-11-24 09:29:48.208844554 +0000 UTC m=+2164.540117806" lastFinishedPulling="2025-11-24 09:29:48.598431158 +0000 UTC m=+2164.929704410" observedRunningTime="2025-11-24 09:29:49.648133869 +0000 UTC m=+2165.979407141" watchObservedRunningTime="2025-11-24 09:29:49.655898061 +0000 UTC m=+2165.987171333" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.669987 4719 scope.go:117] "RemoveContainer" containerID="b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.715873 4719 scope.go:117] "RemoveContainer" containerID="b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.719141 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9150973-c4d7-42da-be80-9020ced41b77-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.754694 4719 scope.go:117] "RemoveContainer" containerID="7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c" Nov 24 09:29:49 crc kubenswrapper[4719]: E1124 09:29:49.755185 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c\": container with ID starting with 7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c not found: ID does not exist" containerID="7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.755230 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c"} err="failed to get container status \"7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c\": rpc error: code = NotFound desc = could not find container \"7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c\": container with ID starting with 7630b8e8a80921a99bf81f536911fa99ad607d951f828b9b195bf0b08105ff8c not found: ID does not exist" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.755253 4719 scope.go:117] "RemoveContainer" containerID="b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d" Nov 24 09:29:49 crc kubenswrapper[4719]: E1124 09:29:49.755641 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d\": container with ID starting with b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d not found: ID does not exist" containerID="b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d" 
Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.755663 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d"} err="failed to get container status \"b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d\": rpc error: code = NotFound desc = could not find container \"b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d\": container with ID starting with b0c223bb3502e216bb877aaeaaff74730f080d8108caf916b66cc7a6164e588d not found: ID does not exist" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.755692 4719 scope.go:117] "RemoveContainer" containerID="b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a" Nov 24 09:29:49 crc kubenswrapper[4719]: E1124 09:29:49.755983 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a\": container with ID starting with b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a not found: ID does not exist" containerID="b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.756018 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a"} err="failed to get container status \"b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a\": rpc error: code = NotFound desc = could not find container \"b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a\": container with ID starting with b548fd4df02d756a024b422def061d9e7711b5fc2adcaa6a0a7c0efa1f614c9a not found: ID does not exist" Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.972784 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8wlf9"] Nov 24 09:29:49 crc kubenswrapper[4719]: I1124 09:29:49.983616 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8wlf9"] Nov 24 09:29:50 crc kubenswrapper[4719]: I1124 09:29:50.532009 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9150973-c4d7-42da-be80-9020ced41b77" path="/var/lib/kubelet/pods/f9150973-c4d7-42da-be80-9020ced41b77/volumes" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.147287 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv"] Nov 24 09:30:00 crc kubenswrapper[4719]: E1124 09:30:00.149361 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9150973-c4d7-42da-be80-9020ced41b77" containerName="registry-server" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.149496 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9150973-c4d7-42da-be80-9020ced41b77" containerName="registry-server" Nov 24 09:30:00 crc kubenswrapper[4719]: E1124 09:30:00.149595 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9150973-c4d7-42da-be80-9020ced41b77" containerName="extract-content" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.149670 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9150973-c4d7-42da-be80-9020ced41b77" containerName="extract-content" Nov 24 09:30:00 crc kubenswrapper[4719]: E1124 09:30:00.149762 4719 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f9150973-c4d7-42da-be80-9020ced41b77" containerName="extract-utilities" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.149841 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9150973-c4d7-42da-be80-9020ced41b77" containerName="extract-utilities" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.150214 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9150973-c4d7-42da-be80-9020ced41b77" containerName="registry-server" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.151186 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.153912 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.155208 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.156942 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv"] Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.246757 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-config-volume\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.246978 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8rkq\" (UniqueName: \"kubernetes.io/projected/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-kube-api-access-k8rkq\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.247007 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-secret-volume\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.348297 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8rkq\" (UniqueName: \"kubernetes.io/projected/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-kube-api-access-k8rkq\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.348366 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-secret-volume\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.348408 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-config-volume\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.349700 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-config-volume\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.354638 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-secret-volume\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.367141 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8rkq\" (UniqueName: \"kubernetes.io/projected/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-kube-api-access-k8rkq\") pod \"collect-profiles-29399610-rjhbv\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.475192 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:00 crc kubenswrapper[4719]: I1124 09:30:00.905140 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv"] Nov 24 09:30:01 crc kubenswrapper[4719]: I1124 09:30:01.751352 4719 generic.go:334] "Generic (PLEG): container finished" podID="0c32a80c-2ba9-4afc-9e04-6bec58abaa4e" containerID="1dda1bd28a2a67b5781a65df160f6af79fe29b71515a8044cd091aa60bc3569a" exitCode=0 Nov 24 09:30:01 crc kubenswrapper[4719]: I1124 09:30:01.751417 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" event={"ID":"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e","Type":"ContainerDied","Data":"1dda1bd28a2a67b5781a65df160f6af79fe29b71515a8044cd091aa60bc3569a"} Nov 24 09:30:01 crc kubenswrapper[4719]: I1124 09:30:01.751629 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" event={"ID":"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e","Type":"ContainerStarted","Data":"0df3060fdc6b4316f0ed4f502ad0868c70614b87f18c75354e81bf4a3de3862c"} Nov 24 09:30:02 crc kubenswrapper[4719]: I1124 09:30:02.761393 4719 generic.go:334] "Generic (PLEG): container finished" podID="63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" containerID="ded8ff6fe28ce79cd4736bab8d8e4488084b8f4c430690581648e3635f8935ef" exitCode=0 Nov 24 09:30:02 crc kubenswrapper[4719]: I1124 09:30:02.761466 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" event={"ID":"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f","Type":"ContainerDied","Data":"ded8ff6fe28ce79cd4736bab8d8e4488084b8f4c430690581648e3635f8935ef"} Nov 24 
09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.095425 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.208201 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-config-volume\") pod \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.208284 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8rkq\" (UniqueName: \"kubernetes.io/projected/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-kube-api-access-k8rkq\") pod \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.208302 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-secret-volume\") pod \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\" (UID: \"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e\") " Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.209169 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c32a80c-2ba9-4afc-9e04-6bec58abaa4e" (UID: "0c32a80c-2ba9-4afc-9e04-6bec58abaa4e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.215263 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c32a80c-2ba9-4afc-9e04-6bec58abaa4e" (UID: "0c32a80c-2ba9-4afc-9e04-6bec58abaa4e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.215305 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-kube-api-access-k8rkq" (OuterVolumeSpecName: "kube-api-access-k8rkq") pod "0c32a80c-2ba9-4afc-9e04-6bec58abaa4e" (UID: "0c32a80c-2ba9-4afc-9e04-6bec58abaa4e"). InnerVolumeSpecName "kube-api-access-k8rkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.310377 4719 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.310415 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8rkq\" (UniqueName: \"kubernetes.io/projected/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-kube-api-access-k8rkq\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.310427 4719 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.776154 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.777445 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv" event={"ID":"0c32a80c-2ba9-4afc-9e04-6bec58abaa4e","Type":"ContainerDied","Data":"0df3060fdc6b4316f0ed4f502ad0868c70614b87f18c75354e81bf4a3de3862c"} Nov 24 09:30:03 crc kubenswrapper[4719]: I1124 09:30:03.777501 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df3060fdc6b4316f0ed4f502ad0868c70614b87f18c75354e81bf4a3de3862c" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.173059 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"] Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.180381 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399565-zmc2q"] Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.191652 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.327476 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh8zg\" (UniqueName: \"kubernetes.io/projected/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-kube-api-access-bh8zg\") pod \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.327549 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ssh-key\") pod \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.327580 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-inventory\") pod \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.327642 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ceph\") pod \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.327754 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-repo-setup-combined-ca-bundle\") pod \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\" (UID: \"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f\") " Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.332759 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ceph" (OuterVolumeSpecName: "ceph") pod "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" (UID: "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.334146 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-kube-api-access-bh8zg" (OuterVolumeSpecName: "kube-api-access-bh8zg") pod "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" (UID: "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f"). InnerVolumeSpecName "kube-api-access-bh8zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.341257 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" (UID: "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.356908 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" (UID: "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.363061 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-inventory" (OuterVolumeSpecName: "inventory") pod "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" (UID: "63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.430922 4719 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.430956 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh8zg\" (UniqueName: \"kubernetes.io/projected/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-kube-api-access-bh8zg\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.430969 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.430978 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.430989 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.536456 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b74b9b-50d6-454d-b527-a5980f7d762e" path="/var/lib/kubelet/pods/c0b74b9b-50d6-454d-b527-a5980f7d762e/volumes" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.788296 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" event={"ID":"63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f","Type":"ContainerDied","Data":"0791abe3d37ba5b9e9e60a9969dcee9eb00d68b6d47aa275c83422a7201b87b8"} Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.788333 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0791abe3d37ba5b9e9e60a9969dcee9eb00d68b6d47aa275c83422a7201b87b8" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.788341 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.957375 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx"] Nov 24 09:30:04 crc kubenswrapper[4719]: E1124 09:30:04.957808 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.957827 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:30:04 crc kubenswrapper[4719]: E1124 09:30:04.957857 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c32a80c-2ba9-4afc-9e04-6bec58abaa4e" containerName="collect-profiles" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.957865 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c32a80c-2ba9-4afc-9e04-6bec58abaa4e" containerName="collect-profiles" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.958050 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c32a80c-2ba9-4afc-9e04-6bec58abaa4e" containerName="collect-profiles" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.958077 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.958664 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.962464 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.962723 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.962942 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.963102 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.963198 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:30:04 crc kubenswrapper[4719]: I1124 09:30:04.966121 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx"] Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.043241 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.043809 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.044029 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.044258 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z2hp\" (UniqueName: \"kubernetes.io/projected/2825c32a-3ceb-4ba8-a522-554244ca93dd-kube-api-access-9z2hp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.044450 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.146840 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.147178 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z2hp\" (UniqueName: \"kubernetes.io/projected/2825c32a-3ceb-4ba8-a522-554244ca93dd-kube-api-access-9z2hp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.147320 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.147500 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.147611 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.151740 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.152177 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.152533 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.152797 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.165029 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z2hp\" (UniqueName: \"kubernetes.io/projected/2825c32a-3ceb-4ba8-a522-554244ca93dd-kube-api-access-9z2hp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.307555 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:30:05 crc kubenswrapper[4719]: I1124 09:30:05.838965 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx"] Nov 24 09:30:05 crc kubenswrapper[4719]: W1124 09:30:05.850874 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2825c32a_3ceb_4ba8_a522_554244ca93dd.slice/crio-3aac3721bdddc8794ee1780370e30c7d6040fc805eab0d1ed6342475e0f26089 WatchSource:0}: Error finding container 3aac3721bdddc8794ee1780370e30c7d6040fc805eab0d1ed6342475e0f26089: Status 404 returned error can't find the container with id 3aac3721bdddc8794ee1780370e30c7d6040fc805eab0d1ed6342475e0f26089 Nov 24 09:30:06 crc kubenswrapper[4719]: I1124 09:30:06.823322 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" event={"ID":"2825c32a-3ceb-4ba8-a522-554244ca93dd","Type":"ContainerStarted","Data":"3aac3721bdddc8794ee1780370e30c7d6040fc805eab0d1ed6342475e0f26089"} Nov 24 09:30:07 crc kubenswrapper[4719]: I1124 09:30:07.834013 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" event={"ID":"2825c32a-3ceb-4ba8-a522-554244ca93dd","Type":"ContainerStarted","Data":"61c13126bbb42db19c5a395498eebd5eef7fee708b61bcd24022c9a6fcd81a4f"} Nov 24 09:30:07 crc kubenswrapper[4719]: I1124 09:30:07.857921 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" podStartSLOduration=3.094826577 podStartE2EDuration="3.857900296s" podCreationTimestamp="2025-11-24 09:30:04 +0000 UTC" firstStartedPulling="2025-11-24 09:30:05.852974155 +0000 UTC m=+2182.184247427" lastFinishedPulling="2025-11-24 09:30:06.616047894 +0000 UTC m=+2182.947321146" observedRunningTime="2025-11-24 09:30:07.853257974 +0000 UTC m=+2184.184531226" watchObservedRunningTime="2025-11-24 09:30:07.857900296 +0000 UTC m=+2184.189173548" Nov 24 09:30:10 crc kubenswrapper[4719]: I1124 09:30:10.866583 4719 scope.go:117] "RemoveContainer" containerID="1c8a44e4051e38a8fbe6bd555013142154c498f23f9cbe1cbea604e08f72102b" Nov 24 09:30:10 crc kubenswrapper[4719]: I1124 09:30:10.938346 4719 scope.go:117] "RemoveContainer" containerID="1b59ee9d23e52510a03a492c3244794091c9706755f09a92fb9d69b40d78ef10" Nov 24 09:30:10 crc kubenswrapper[4719]: I1124 09:30:10.973900 4719 scope.go:117] "RemoveContainer" containerID="0e159574a8f6d1aa52fbdcd18914875b209f808d7cd57ead49adc810d278920f" Nov 24 09:30:11 crc kubenswrapper[4719]: I1124 09:30:11.033521 4719 scope.go:117] "RemoveContainer" containerID="360e09179a9ae78823ab350d8c411b432ca3238a792954574315d82e545b661e" Nov 24 09:30:11 crc kubenswrapper[4719]: I1124 09:30:11.066572 4719 scope.go:117] "RemoveContainer" containerID="a77e5d6526f1d855a85a1f06662e98c310a8d1531425a629b642f242ba0e95aa" Nov 24 09:30:11 crc kubenswrapper[4719]: I1124 09:30:11.120469 4719 scope.go:117] "RemoveContainer" containerID="0d27cc908daee8716c49620ec3a9e45828f0ed247e1aed219ae9378e707906dd" Nov 24 09:30:11 crc kubenswrapper[4719]: I1124 09:30:11.189937 4719 scope.go:117] "RemoveContainer" containerID="117a45896fcdf118e58703d73e254195d7d7d52e29dd4cb1f3e15184cd223ac0" Nov 24 09:30:11 crc kubenswrapper[4719]: I1124 09:30:11.227760 4719 scope.go:117] "RemoveContainer" 
containerID="c89f8519f492c64d9ebba9faa6d076032ace204c019d81e8ea3cea13dea82ef1" Nov 24 09:30:11 crc kubenswrapper[4719]: I1124 09:30:11.286928 4719 scope.go:117] "RemoveContainer" containerID="c24fce013632f171e8bd789580522590bd564a809fd1bd6831b7865613ed2227" Nov 24 09:30:11 crc kubenswrapper[4719]: I1124 09:30:11.392609 4719 scope.go:117] "RemoveContainer" containerID="d47c12f105bfae403dfd822911469c934da096619c8429cc8a050e8eb3f5ac00" Nov 24 09:30:11 crc kubenswrapper[4719]: I1124 09:30:11.425278 4719 scope.go:117] "RemoveContainer" containerID="e3a56ea6a41bc25619f8e603ea1affc41f06dc90a2b14c6687009e8de23f33f6" Nov 24 09:31:04 crc kubenswrapper[4719]: I1124 09:31:04.562449 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:31:04 crc kubenswrapper[4719]: I1124 09:31:04.563005 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:31:34 crc kubenswrapper[4719]: I1124 09:31:34.562185 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:31:34 crc kubenswrapper[4719]: I1124 09:31:34.562698 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:31:50 crc kubenswrapper[4719]: I1124 09:31:50.656927 4719 generic.go:334] "Generic (PLEG): container finished" podID="2825c32a-3ceb-4ba8-a522-554244ca93dd" containerID="61c13126bbb42db19c5a395498eebd5eef7fee708b61bcd24022c9a6fcd81a4f" exitCode=0 Nov 24 09:31:50 crc kubenswrapper[4719]: I1124 09:31:50.656970 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" event={"ID":"2825c32a-3ceb-4ba8-a522-554244ca93dd","Type":"ContainerDied","Data":"61c13126bbb42db19c5a395498eebd5eef7fee708b61bcd24022c9a6fcd81a4f"} Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.050364 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.167191 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-inventory\") pod \"2825c32a-3ceb-4ba8-a522-554244ca93dd\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.168356 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z2hp\" (UniqueName: \"kubernetes.io/projected/2825c32a-3ceb-4ba8-a522-554244ca93dd-kube-api-access-9z2hp\") pod \"2825c32a-3ceb-4ba8-a522-554244ca93dd\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.168400 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ssh-key\") pod \"2825c32a-3ceb-4ba8-a522-554244ca93dd\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.168496 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ceph\") pod \"2825c32a-3ceb-4ba8-a522-554244ca93dd\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.168822 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-bootstrap-combined-ca-bundle\") pod \"2825c32a-3ceb-4ba8-a522-554244ca93dd\" (UID: \"2825c32a-3ceb-4ba8-a522-554244ca93dd\") " Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.173585 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ceph" (OuterVolumeSpecName: "ceph") pod "2825c32a-3ceb-4ba8-a522-554244ca93dd" (UID: "2825c32a-3ceb-4ba8-a522-554244ca93dd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.173944 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2825c32a-3ceb-4ba8-a522-554244ca93dd" (UID: "2825c32a-3ceb-4ba8-a522-554244ca93dd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.174196 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2825c32a-3ceb-4ba8-a522-554244ca93dd-kube-api-access-9z2hp" (OuterVolumeSpecName: "kube-api-access-9z2hp") pod "2825c32a-3ceb-4ba8-a522-554244ca93dd" (UID: "2825c32a-3ceb-4ba8-a522-554244ca93dd"). InnerVolumeSpecName "kube-api-access-9z2hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.193491 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-inventory" (OuterVolumeSpecName: "inventory") pod "2825c32a-3ceb-4ba8-a522-554244ca93dd" (UID: "2825c32a-3ceb-4ba8-a522-554244ca93dd"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.201171 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2825c32a-3ceb-4ba8-a522-554244ca93dd" (UID: "2825c32a-3ceb-4ba8-a522-554244ca93dd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.271341 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.271390 4719 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.271404 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.271417 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z2hp\" (UniqueName: \"kubernetes.io/projected/2825c32a-3ceb-4ba8-a522-554244ca93dd-kube-api-access-9z2hp\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.271425 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2825c32a-3ceb-4ba8-a522-554244ca93dd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.674122 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" event={"ID":"2825c32a-3ceb-4ba8-a522-554244ca93dd","Type":"ContainerDied","Data":"3aac3721bdddc8794ee1780370e30c7d6040fc805eab0d1ed6342475e0f26089"} Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.674158 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aac3721bdddc8794ee1780370e30c7d6040fc805eab0d1ed6342475e0f26089" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.674190 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.768474 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l"] Nov 24 09:31:52 crc kubenswrapper[4719]: E1124 09:31:52.768814 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2825c32a-3ceb-4ba8-a522-554244ca93dd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.768832 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="2825c32a-3ceb-4ba8-a522-554244ca93dd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.769064 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="2825c32a-3ceb-4ba8-a522-554244ca93dd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.769667 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.775235 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.775511 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.775518 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.775987 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.779342 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.779791 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l"] Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.879984 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.880025 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.880092 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.880124 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n5l5\" (UniqueName: \"kubernetes.io/projected/70b5dfb2-d163-4188-989e-e1f2a9d84026-kube-api-access-8n5l5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.981757 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.982911 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.983120 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.983191 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n5l5\" (UniqueName: \"kubernetes.io/projected/70b5dfb2-d163-4188-989e-e1f2a9d84026-kube-api-access-8n5l5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.985638 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.990361 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:52 crc kubenswrapper[4719]: I1124 09:31:52.993536 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:53 crc kubenswrapper[4719]: I1124 09:31:53.001095 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n5l5\" (UniqueName: \"kubernetes.io/projected/70b5dfb2-d163-4188-989e-e1f2a9d84026-kube-api-access-8n5l5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:53 crc kubenswrapper[4719]: I1124 09:31:53.085583 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:31:53 crc kubenswrapper[4719]: I1124 09:31:53.612990 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l"] Nov 24 09:31:53 crc kubenswrapper[4719]: I1124 09:31:53.689297 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" event={"ID":"70b5dfb2-d163-4188-989e-e1f2a9d84026","Type":"ContainerStarted","Data":"94f23e8333ba0f574e88b97dcf6b51a5c3ce2bc989ad82b62f9e02e87b83e8eb"} Nov 24 09:31:54 crc kubenswrapper[4719]: I1124 09:31:54.698545 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" event={"ID":"70b5dfb2-d163-4188-989e-e1f2a9d84026","Type":"ContainerStarted","Data":"74fb79043b4a6e4d46ee1dc89602fbfe10da0d2fc1ed5693ee170655fcc379bb"} Nov 24 09:31:54 crc kubenswrapper[4719]: I1124 09:31:54.722387 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" podStartSLOduration=1.986632149 podStartE2EDuration="2.722362755s" podCreationTimestamp="2025-11-24 09:31:52 +0000 UTC" firstStartedPulling="2025-11-24 09:31:53.622990416 +0000 UTC m=+2289.954263668" lastFinishedPulling="2025-11-24 09:31:54.358721022 +0000 UTC m=+2290.689994274" observedRunningTime="2025-11-24 09:31:54.715633823 +0000 UTC m=+2291.046907095" watchObservedRunningTime="2025-11-24 09:31:54.722362755 +0000 UTC m=+2291.053636047" Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.561915 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.562366 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.562411 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.563094 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1"} 
pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.563146 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" gracePeriod=600 Nov 24 09:32:04 crc kubenswrapper[4719]: E1124 09:32:04.682759 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.778596 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" exitCode=0 Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.778678 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1"} Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.778925 4719 scope.go:117] "RemoveContainer" containerID="243cff5b77320b462dfde0084994a0c4bd7eb54c42623909c12e57e5ffc63d4d" Nov 24 09:32:04 crc kubenswrapper[4719]: I1124 09:32:04.779583 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:32:04 crc kubenswrapper[4719]: E1124 09:32:04.779953 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:32:20 crc kubenswrapper[4719]: I1124 09:32:20.521458 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:32:20 crc kubenswrapper[4719]: E1124 09:32:20.522174 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:32:21 crc kubenswrapper[4719]: I1124 09:32:21.937972 4719 generic.go:334] "Generic (PLEG): container finished" podID="70b5dfb2-d163-4188-989e-e1f2a9d84026" containerID="74fb79043b4a6e4d46ee1dc89602fbfe10da0d2fc1ed5693ee170655fcc379bb" exitCode=0 Nov 24 09:32:21 crc kubenswrapper[4719]: I1124 09:32:21.938031 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" event={"ID":"70b5dfb2-d163-4188-989e-e1f2a9d84026","Type":"ContainerDied","Data":"74fb79043b4a6e4d46ee1dc89602fbfe10da0d2fc1ed5693ee170655fcc379bb"} Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.371810 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.461214 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n5l5\" (UniqueName: \"kubernetes.io/projected/70b5dfb2-d163-4188-989e-e1f2a9d84026-kube-api-access-8n5l5\") pod \"70b5dfb2-d163-4188-989e-e1f2a9d84026\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.461307 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-inventory\") pod \"70b5dfb2-d163-4188-989e-e1f2a9d84026\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.461326 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ssh-key\") pod \"70b5dfb2-d163-4188-989e-e1f2a9d84026\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.461409 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ceph\") pod \"70b5dfb2-d163-4188-989e-e1f2a9d84026\" (UID: \"70b5dfb2-d163-4188-989e-e1f2a9d84026\") " Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.465985 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ceph" (OuterVolumeSpecName: "ceph") pod "70b5dfb2-d163-4188-989e-e1f2a9d84026" (UID: "70b5dfb2-d163-4188-989e-e1f2a9d84026"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.466215 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b5dfb2-d163-4188-989e-e1f2a9d84026-kube-api-access-8n5l5" (OuterVolumeSpecName: "kube-api-access-8n5l5") pod "70b5dfb2-d163-4188-989e-e1f2a9d84026" (UID: "70b5dfb2-d163-4188-989e-e1f2a9d84026"). InnerVolumeSpecName "kube-api-access-8n5l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.487860 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-inventory" (OuterVolumeSpecName: "inventory") pod "70b5dfb2-d163-4188-989e-e1f2a9d84026" (UID: "70b5dfb2-d163-4188-989e-e1f2a9d84026"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.491519 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "70b5dfb2-d163-4188-989e-e1f2a9d84026" (UID: "70b5dfb2-d163-4188-989e-e1f2a9d84026"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.563302 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.563338 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n5l5\" (UniqueName: \"kubernetes.io/projected/70b5dfb2-d163-4188-989e-e1f2a9d84026-kube-api-access-8n5l5\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.563353 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.563364 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70b5dfb2-d163-4188-989e-e1f2a9d84026-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.954356 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" event={"ID":"70b5dfb2-d163-4188-989e-e1f2a9d84026","Type":"ContainerDied","Data":"94f23e8333ba0f574e88b97dcf6b51a5c3ce2bc989ad82b62f9e02e87b83e8eb"} Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.954629 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f23e8333ba0f574e88b97dcf6b51a5c3ce2bc989ad82b62f9e02e87b83e8eb" Nov 24 09:32:23 crc kubenswrapper[4719]: I1124 09:32:23.954412 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.064368 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw"] Nov 24 09:32:24 crc kubenswrapper[4719]: E1124 09:32:24.064813 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b5dfb2-d163-4188-989e-e1f2a9d84026" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.064829 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b5dfb2-d163-4188-989e-e1f2a9d84026" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.065153 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b5dfb2-d163-4188-989e-e1f2a9d84026" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.065889 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.069609 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.072377 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.072421 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.072472 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.073663 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw"] Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.074689 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.171777 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.171847 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.172141 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.172309 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvd8w\" (UniqueName: \"kubernetes.io/projected/6d644fcc-6653-41e6-835d-430f31694bd1-kube-api-access-gvd8w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.274311 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvd8w\" (UniqueName: \"kubernetes.io/projected/6d644fcc-6653-41e6-835d-430f31694bd1-kube-api-access-gvd8w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.274420 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.274470 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.274565 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.288675 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.289884 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.290065 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvd8w\" (UniqueName: \"kubernetes.io/projected/6d644fcc-6653-41e6-835d-430f31694bd1-kube-api-access-gvd8w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.290612 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.387881 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.920225 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw"] Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.922925 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:32:24 crc kubenswrapper[4719]: I1124 09:32:24.969761 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" event={"ID":"6d644fcc-6653-41e6-835d-430f31694bd1","Type":"ContainerStarted","Data":"52be6a1d1e4c1112e49df52f27743b85122b9e2c57bf8ea322ec943ed1c5fd80"} Nov 24 09:32:25 crc kubenswrapper[4719]: I1124 09:32:25.978927 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" event={"ID":"6d644fcc-6653-41e6-835d-430f31694bd1","Type":"ContainerStarted","Data":"80bfb5c38789a4b0aef0182dcf25898e5ca3f35e6f8836f0f865121e5e09f804"} Nov 24 09:32:25 crc kubenswrapper[4719]: I1124 09:32:25.998612 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" podStartSLOduration=1.2865980289999999 podStartE2EDuration="1.998593507s" podCreationTimestamp="2025-11-24 09:32:24 +0000 UTC" firstStartedPulling="2025-11-24 09:32:24.922682978 +0000 UTC m=+2321.253956230" lastFinishedPulling="2025-11-24 09:32:25.634678466 +0000 UTC m=+2321.965951708" observedRunningTime="2025-11-24 09:32:25.994688455 +0000 UTC m=+2322.325961717" watchObservedRunningTime="2025-11-24 09:32:25.998593507 +0000 UTC m=+2322.329866759" Nov 24 09:32:31 crc kubenswrapper[4719]: I1124 09:32:31.059187 4719 generic.go:334] "Generic (PLEG): container finished" podID="6d644fcc-6653-41e6-835d-430f31694bd1" containerID="80bfb5c38789a4b0aef0182dcf25898e5ca3f35e6f8836f0f865121e5e09f804" exitCode=0 Nov 24 09:32:31 crc kubenswrapper[4719]: I1124 09:32:31.059293 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" event={"ID":"6d644fcc-6653-41e6-835d-430f31694bd1","Type":"ContainerDied","Data":"80bfb5c38789a4b0aef0182dcf25898e5ca3f35e6f8836f0f865121e5e09f804"} Nov 24 09:32:31 crc kubenswrapper[4719]: I1124 09:32:31.520771 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:32:31 crc kubenswrapper[4719]: E1124 09:32:31.521120 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.455684 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.577750 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ssh-key\") pod \"6d644fcc-6653-41e6-835d-430f31694bd1\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.577873 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvd8w\" (UniqueName: \"kubernetes.io/projected/6d644fcc-6653-41e6-835d-430f31694bd1-kube-api-access-gvd8w\") pod \"6d644fcc-6653-41e6-835d-430f31694bd1\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.577956 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-inventory\") pod \"6d644fcc-6653-41e6-835d-430f31694bd1\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.578105 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ceph\") pod \"6d644fcc-6653-41e6-835d-430f31694bd1\" (UID: \"6d644fcc-6653-41e6-835d-430f31694bd1\") " Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.583287 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ceph" (OuterVolumeSpecName: "ceph") pod "6d644fcc-6653-41e6-835d-430f31694bd1" (UID: "6d644fcc-6653-41e6-835d-430f31694bd1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.583304 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d644fcc-6653-41e6-835d-430f31694bd1-kube-api-access-gvd8w" (OuterVolumeSpecName: "kube-api-access-gvd8w") pod "6d644fcc-6653-41e6-835d-430f31694bd1" (UID: "6d644fcc-6653-41e6-835d-430f31694bd1"). InnerVolumeSpecName "kube-api-access-gvd8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.601320 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-inventory" (OuterVolumeSpecName: "inventory") pod "6d644fcc-6653-41e6-835d-430f31694bd1" (UID: "6d644fcc-6653-41e6-835d-430f31694bd1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.611501 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6d644fcc-6653-41e6-835d-430f31694bd1" (UID: "6d644fcc-6653-41e6-835d-430f31694bd1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.680824 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.680853 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.680863 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvd8w\" (UniqueName: \"kubernetes.io/projected/6d644fcc-6653-41e6-835d-430f31694bd1-kube-api-access-gvd8w\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:32 crc kubenswrapper[4719]: I1124 09:32:32.680871 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d644fcc-6653-41e6-835d-430f31694bd1-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.078807 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" event={"ID":"6d644fcc-6653-41e6-835d-430f31694bd1","Type":"ContainerDied","Data":"52be6a1d1e4c1112e49df52f27743b85122b9e2c57bf8ea322ec943ed1c5fd80"} Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.078842 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52be6a1d1e4c1112e49df52f27743b85122b9e2c57bf8ea322ec943ed1c5fd80" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.078844 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.167220 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g"] Nov 24 09:32:33 crc kubenswrapper[4719]: E1124 09:32:33.167641 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d644fcc-6653-41e6-835d-430f31694bd1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.167663 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d644fcc-6653-41e6-835d-430f31694bd1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.167904 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d644fcc-6653-41e6-835d-430f31694bd1" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.168694 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.171912 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.172228 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.175907 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.176120 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.178147 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.190970 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g"] Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.290027 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.290131 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.290177 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.290271 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs7wg\" (UniqueName: \"kubernetes.io/projected/b7e3784d-ae59-4dce-9c51-429e2361ee3b-kube-api-access-fs7wg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.392200 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs7wg\" (UniqueName: \"kubernetes.io/projected/b7e3784d-ae59-4dce-9c51-429e2361ee3b-kube-api-access-fs7wg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.392318 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.392342 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.392375 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.395536 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.396324 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.403570 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.416729 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs7wg\" (UniqueName: \"kubernetes.io/projected/b7e3784d-ae59-4dce-9c51-429e2361ee3b-kube-api-access-fs7wg\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dvw2g\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:33 crc kubenswrapper[4719]: I1124 09:32:33.500630 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:32:34 crc kubenswrapper[4719]: I1124 09:32:34.031766 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g"] Nov 24 09:32:34 crc kubenswrapper[4719]: I1124 09:32:34.089223 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" event={"ID":"b7e3784d-ae59-4dce-9c51-429e2361ee3b","Type":"ContainerStarted","Data":"86ab1439aa6e1a8bcc1683d3990a788ddf36d72aff303b7ea4d782bb48379133"} Nov 24 09:32:35 crc kubenswrapper[4719]: I1124 09:32:35.098508 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" event={"ID":"b7e3784d-ae59-4dce-9c51-429e2361ee3b","Type":"ContainerStarted","Data":"dbde4ccc66e8aa5491ddb4cb3e756a76b24fef4bfabd9da0a92dd3fec7042320"} Nov 24 09:32:35 crc kubenswrapper[4719]: I1124 09:32:35.117656 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" podStartSLOduration=1.720971652 podStartE2EDuration="2.117635126s" podCreationTimestamp="2025-11-24 09:32:33 +0000 UTC" firstStartedPulling="2025-11-24 09:32:34.03587067 +0000 UTC m=+2330.367143922" lastFinishedPulling="2025-11-24 09:32:34.432534104 +0000 UTC m=+2330.763807396" observedRunningTime="2025-11-24 09:32:35.111256554 +0000 UTC m=+2331.442529816" watchObservedRunningTime="2025-11-24 09:32:35.117635126 +0000 UTC m=+2331.448908378" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.336965 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rbw6p"] Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.339603 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.350625 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbw6p"] Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.480241 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktq9j\" (UniqueName: \"kubernetes.io/projected/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-kube-api-access-ktq9j\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.480411 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-utilities\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.480437 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-catalog-content\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.581780 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-utilities\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.581818 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-catalog-content\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.581893 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktq9j\" (UniqueName: \"kubernetes.io/projected/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-kube-api-access-ktq9j\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.582269 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-utilities\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.582294 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-catalog-content\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.608990 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ktq9j\" (UniqueName: \"kubernetes.io/projected/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-kube-api-access-ktq9j\") pod \"redhat-marketplace-rbw6p\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:43 crc kubenswrapper[4719]: I1124 09:32:43.661726 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:44 crc kubenswrapper[4719]: I1124 09:32:44.186649 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbw6p"] Nov 24 09:32:45 crc kubenswrapper[4719]: I1124 09:32:45.189307 4719 generic.go:334] "Generic (PLEG): container finished" podID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerID="61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b" exitCode=0 Nov 24 09:32:45 crc kubenswrapper[4719]: I1124 09:32:45.189399 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbw6p" event={"ID":"5b3eeabb-74b2-4c47-96be-ee095a50b6d7","Type":"ContainerDied","Data":"61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b"} Nov 24 09:32:45 crc kubenswrapper[4719]: I1124 09:32:45.189681 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbw6p" event={"ID":"5b3eeabb-74b2-4c47-96be-ee095a50b6d7","Type":"ContainerStarted","Data":"4f3f9ae2cdba7bff48ec5d9b0b46e72432f9cec478de09ba8580fb259b542216"} Nov 24 09:32:45 crc kubenswrapper[4719]: I1124 09:32:45.521624 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:32:45 crc kubenswrapper[4719]: E1124 09:32:45.521903 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:32:46 crc kubenswrapper[4719]: I1124 09:32:46.201353 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbw6p" event={"ID":"5b3eeabb-74b2-4c47-96be-ee095a50b6d7","Type":"ContainerStarted","Data":"330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3"} Nov 24 09:32:47 crc kubenswrapper[4719]: I1124 09:32:47.212809 4719 generic.go:334] "Generic (PLEG): container finished" podID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerID="330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3" exitCode=0 Nov 24 09:32:47 crc kubenswrapper[4719]: I1124 09:32:47.212915 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbw6p" event={"ID":"5b3eeabb-74b2-4c47-96be-ee095a50b6d7","Type":"ContainerDied","Data":"330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3"} Nov 24 09:32:48 crc kubenswrapper[4719]: I1124 09:32:48.223573 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbw6p" event={"ID":"5b3eeabb-74b2-4c47-96be-ee095a50b6d7","Type":"ContainerStarted","Data":"cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3"} Nov 24 09:32:48 crc kubenswrapper[4719]: I1124 09:32:48.249483 4719 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-rbw6p" podStartSLOduration=2.847201272 podStartE2EDuration="5.249466353s" podCreationTimestamp="2025-11-24 09:32:43 +0000 UTC" firstStartedPulling="2025-11-24 09:32:45.192482107 +0000 UTC m=+2341.523755369" lastFinishedPulling="2025-11-24 09:32:47.594747188 +0000 UTC m=+2343.926020450" observedRunningTime="2025-11-24 09:32:48.242842144 +0000 UTC m=+2344.574115416" watchObservedRunningTime="2025-11-24 09:32:48.249466353 +0000 UTC m=+2344.580739605" Nov 24 09:32:53 crc kubenswrapper[4719]: I1124 09:32:53.662131 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:53 crc kubenswrapper[4719]: I1124 09:32:53.662541 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:53 crc kubenswrapper[4719]: I1124 09:32:53.713470 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:54 crc kubenswrapper[4719]: I1124 09:32:54.314337 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:54 crc kubenswrapper[4719]: I1124 09:32:54.377782 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbw6p"] Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.284604 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rbw6p" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerName="registry-server" containerID="cri-o://cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3" gracePeriod=2 Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.730423 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.822693 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-utilities\") pod \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.822735 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-catalog-content\") pod \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.822788 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktq9j\" (UniqueName: \"kubernetes.io/projected/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-kube-api-access-ktq9j\") pod \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\" (UID: \"5b3eeabb-74b2-4c47-96be-ee095a50b6d7\") " Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.823582 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-utilities" (OuterVolumeSpecName: "utilities") pod "5b3eeabb-74b2-4c47-96be-ee095a50b6d7" (UID: "5b3eeabb-74b2-4c47-96be-ee095a50b6d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.828817 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-kube-api-access-ktq9j" (OuterVolumeSpecName: "kube-api-access-ktq9j") pod "5b3eeabb-74b2-4c47-96be-ee095a50b6d7" (UID: "5b3eeabb-74b2-4c47-96be-ee095a50b6d7"). InnerVolumeSpecName "kube-api-access-ktq9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.841698 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b3eeabb-74b2-4c47-96be-ee095a50b6d7" (UID: "5b3eeabb-74b2-4c47-96be-ee095a50b6d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.924518 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.924560 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:56 crc kubenswrapper[4719]: I1124 09:32:56.924576 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktq9j\" (UniqueName: \"kubernetes.io/projected/5b3eeabb-74b2-4c47-96be-ee095a50b6d7-kube-api-access-ktq9j\") on node \"crc\" DevicePath \"\"" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.296874 4719 generic.go:334] "Generic (PLEG): container finished" podID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerID="cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3" exitCode=0 Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.297003 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbw6p" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.297046 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbw6p" event={"ID":"5b3eeabb-74b2-4c47-96be-ee095a50b6d7","Type":"ContainerDied","Data":"cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3"} Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.297350 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbw6p" event={"ID":"5b3eeabb-74b2-4c47-96be-ee095a50b6d7","Type":"ContainerDied","Data":"4f3f9ae2cdba7bff48ec5d9b0b46e72432f9cec478de09ba8580fb259b542216"} Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.297394 4719 scope.go:117] "RemoveContainer" containerID="cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.317142 4719 scope.go:117] "RemoveContainer" containerID="330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.350907 4719 scope.go:117] "RemoveContainer" containerID="61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.351157 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbw6p"] Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.352580 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbw6p"] Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.381008 4719 scope.go:117] "RemoveContainer" containerID="cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3" Nov 24 09:32:57 crc kubenswrapper[4719]: E1124 09:32:57.381576 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3\": container with ID starting with cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3 not found: ID does not exist" containerID="cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.381686 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3"} err="failed to get container status \"cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3\": rpc error: code = NotFound desc = could not find container \"cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3\": container with ID starting with cf82f80e9c14b09a11dc13ccbb3ae17f832f3d918db9b26ef55ce700b02245e3 not found: ID does not exist" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.381787 4719 scope.go:117] "RemoveContainer" containerID="330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3" Nov 24 09:32:57 crc kubenswrapper[4719]: E1124 09:32:57.382249 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3\": container with ID starting with 330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3 not found: ID does not exist" containerID="330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.382276 4719 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3"} err="failed to get container status \"330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3\": rpc error: code = NotFound desc = could not find container \"330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3\": container with ID starting with 330b9464b77ce6c5e10ecc4f023df88727db2c91d80722b354bc2120bb037fe3 not found: ID does not exist" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.382294 4719 scope.go:117] "RemoveContainer" containerID="61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b" Nov 24 09:32:57 crc kubenswrapper[4719]: E1124 09:32:57.384511 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b\": container with ID starting with 61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b not found: ID does not exist" containerID="61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b" Nov 24 09:32:57 crc kubenswrapper[4719]: I1124 09:32:57.384601 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b"} err="failed to get container status \"61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b\": rpc error: code = NotFound desc = could not find container \"61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b\": container with ID starting with 61add82a46ef88468f152371e2570bc75aa8a526365530f5b1d4ffbd7707340b not found: ID does not exist" Nov 24 09:32:58 crc kubenswrapper[4719]: I1124 09:32:58.538005 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" path="/var/lib/kubelet/pods/5b3eeabb-74b2-4c47-96be-ee095a50b6d7/volumes" Nov 24 09:32:59 crc kubenswrapper[4719]: I1124 09:32:59.520878 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:32:59 crc kubenswrapper[4719]: E1124 09:32:59.521341 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:33:10 crc kubenswrapper[4719]: I1124 09:33:10.521532 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:33:10 crc kubenswrapper[4719]: E1124 09:33:10.522312 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:33:16 crc kubenswrapper[4719]: I1124 09:33:16.440970 4719 generic.go:334] "Generic (PLEG): container finished" podID="b7e3784d-ae59-4dce-9c51-429e2361ee3b" 
containerID="dbde4ccc66e8aa5491ddb4cb3e756a76b24fef4bfabd9da0a92dd3fec7042320" exitCode=0 Nov 24 09:33:16 crc kubenswrapper[4719]: I1124 09:33:16.441105 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" event={"ID":"b7e3784d-ae59-4dce-9c51-429e2361ee3b","Type":"ContainerDied","Data":"dbde4ccc66e8aa5491ddb4cb3e756a76b24fef4bfabd9da0a92dd3fec7042320"} Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.816299 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.904518 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ssh-key\") pod \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.904720 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ceph\") pod \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.904770 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-inventory\") pod \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.904851 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs7wg\" (UniqueName: \"kubernetes.io/projected/b7e3784d-ae59-4dce-9c51-429e2361ee3b-kube-api-access-fs7wg\") pod \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\" (UID: \"b7e3784d-ae59-4dce-9c51-429e2361ee3b\") " Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.910275 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e3784d-ae59-4dce-9c51-429e2361ee3b-kube-api-access-fs7wg" (OuterVolumeSpecName: "kube-api-access-fs7wg") pod "b7e3784d-ae59-4dce-9c51-429e2361ee3b" (UID: "b7e3784d-ae59-4dce-9c51-429e2361ee3b"). InnerVolumeSpecName "kube-api-access-fs7wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.911432 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ceph" (OuterVolumeSpecName: "ceph") pod "b7e3784d-ae59-4dce-9c51-429e2361ee3b" (UID: "b7e3784d-ae59-4dce-9c51-429e2361ee3b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.932082 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-inventory" (OuterVolumeSpecName: "inventory") pod "b7e3784d-ae59-4dce-9c51-429e2361ee3b" (UID: "b7e3784d-ae59-4dce-9c51-429e2361ee3b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:33:17 crc kubenswrapper[4719]: I1124 09:33:17.933095 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b7e3784d-ae59-4dce-9c51-429e2361ee3b" (UID: "b7e3784d-ae59-4dce-9c51-429e2361ee3b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.008984 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.009021 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.009049 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7e3784d-ae59-4dce-9c51-429e2361ee3b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.009062 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs7wg\" (UniqueName: \"kubernetes.io/projected/b7e3784d-ae59-4dce-9c51-429e2361ee3b-kube-api-access-fs7wg\") on node \"crc\" DevicePath \"\"" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.462886 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" event={"ID":"b7e3784d-ae59-4dce-9c51-429e2361ee3b","Type":"ContainerDied","Data":"86ab1439aa6e1a8bcc1683d3990a788ddf36d72aff303b7ea4d782bb48379133"} Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.463183 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86ab1439aa6e1a8bcc1683d3990a788ddf36d72aff303b7ea4d782bb48379133" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.463260 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dvw2g" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.560099 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq"] Nov 24 09:33:18 crc kubenswrapper[4719]: E1124 09:33:18.560510 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerName="registry-server" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.560532 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerName="registry-server" Nov 24 09:33:18 crc kubenswrapper[4719]: E1124 09:33:18.560558 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerName="extract-utilities" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.560567 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerName="extract-utilities" Nov 24 09:33:18 crc kubenswrapper[4719]: E1124 09:33:18.560585 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerName="extract-content" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.560592 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerName="extract-content" Nov 24 09:33:18 crc kubenswrapper[4719]: E1124 09:33:18.560617 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e3784d-ae59-4dce-9c51-429e2361ee3b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.560626 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e3784d-ae59-4dce-9c51-429e2361ee3b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.560825 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3eeabb-74b2-4c47-96be-ee095a50b6d7" containerName="registry-server" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.560846 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e3784d-ae59-4dce-9c51-429e2361ee3b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.561644 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.564144 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.564298 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.564416 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.564739 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.565020 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.576267 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq"] Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.721683 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2cf\" (UniqueName: \"kubernetes.io/projected/6d07d001-6f91-4b09-9897-01f55286e015-kube-api-access-rd2cf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.721754 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.722110 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.722162 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.823834 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2cf\" (UniqueName: \"kubernetes.io/projected/6d07d001-6f91-4b09-9897-01f55286e015-kube-api-access-rd2cf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.823914 4719 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.823993 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.824022 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.828585 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.829191 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.830401 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.845180 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2cf\" (UniqueName: \"kubernetes.io/projected/6d07d001-6f91-4b09-9897-01f55286e015-kube-api-access-rd2cf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:18 crc kubenswrapper[4719]: I1124 09:33:18.880148 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:19 crc kubenswrapper[4719]: I1124 09:33:19.371411 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq"] Nov 24 09:33:19 crc kubenswrapper[4719]: I1124 09:33:19.476359 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" event={"ID":"6d07d001-6f91-4b09-9897-01f55286e015","Type":"ContainerStarted","Data":"fbdfac53bdd245e32b87999b93c097e26d397cc4aeb41ff6c97ed044d826e3a6"} Nov 24 09:33:20 crc kubenswrapper[4719]: I1124 09:33:20.487751 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" event={"ID":"6d07d001-6f91-4b09-9897-01f55286e015","Type":"ContainerStarted","Data":"a6e02889291c041f4875e637cb58dc00ac47775de58bd41b4d688e4703d04173"} Nov 24 09:33:20 crc kubenswrapper[4719]: I1124 09:33:20.511721 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" podStartSLOduration=2.08291373 podStartE2EDuration="2.511691391s" podCreationTimestamp="2025-11-24 09:33:18 +0000 UTC" firstStartedPulling="2025-11-24 09:33:19.384091197 +0000 UTC m=+2375.715364449" lastFinishedPulling="2025-11-24 09:33:19.812868848 +0000 UTC m=+2376.144142110" observedRunningTime="2025-11-24 09:33:20.505533475 +0000 UTC m=+2376.836806807" watchObservedRunningTime="2025-11-24 09:33:20.511691391 +0000 UTC m=+2376.842964673" Nov 24 09:33:21 crc kubenswrapper[4719]: I1124 09:33:21.531492 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:33:21 crc kubenswrapper[4719]: E1124 09:33:21.531948 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:33:24 crc kubenswrapper[4719]: I1124 09:33:24.521824 4719 generic.go:334] "Generic (PLEG): container finished" podID="6d07d001-6f91-4b09-9897-01f55286e015" containerID="a6e02889291c041f4875e637cb58dc00ac47775de58bd41b4d688e4703d04173" exitCode=0 Nov 24 09:33:24 crc kubenswrapper[4719]: I1124 09:33:24.529860 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" event={"ID":"6d07d001-6f91-4b09-9897-01f55286e015","Type":"ContainerDied","Data":"a6e02889291c041f4875e637cb58dc00ac47775de58bd41b4d688e4703d04173"} Nov 24 09:33:25 crc kubenswrapper[4719]: I1124 09:33:25.942698 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.081967 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-inventory\") pod \"6d07d001-6f91-4b09-9897-01f55286e015\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.082118 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ssh-key\") pod \"6d07d001-6f91-4b09-9897-01f55286e015\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.082189 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd2cf\" (UniqueName: \"kubernetes.io/projected/6d07d001-6f91-4b09-9897-01f55286e015-kube-api-access-rd2cf\") pod \"6d07d001-6f91-4b09-9897-01f55286e015\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.082250 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ceph\") pod \"6d07d001-6f91-4b09-9897-01f55286e015\" (UID: \"6d07d001-6f91-4b09-9897-01f55286e015\") " Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.088043 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d07d001-6f91-4b09-9897-01f55286e015-kube-api-access-rd2cf" (OuterVolumeSpecName: "kube-api-access-rd2cf") pod "6d07d001-6f91-4b09-9897-01f55286e015" (UID: "6d07d001-6f91-4b09-9897-01f55286e015"). InnerVolumeSpecName "kube-api-access-rd2cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.088685 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ceph" (OuterVolumeSpecName: "ceph") pod "6d07d001-6f91-4b09-9897-01f55286e015" (UID: "6d07d001-6f91-4b09-9897-01f55286e015"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.108175 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-inventory" (OuterVolumeSpecName: "inventory") pod "6d07d001-6f91-4b09-9897-01f55286e015" (UID: "6d07d001-6f91-4b09-9897-01f55286e015"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.114272 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6d07d001-6f91-4b09-9897-01f55286e015" (UID: "6d07d001-6f91-4b09-9897-01f55286e015"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.184753 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.184785 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.184795 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd2cf\" (UniqueName: \"kubernetes.io/projected/6d07d001-6f91-4b09-9897-01f55286e015-kube-api-access-rd2cf\") on node \"crc\" DevicePath \"\"" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.184807 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d07d001-6f91-4b09-9897-01f55286e015-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.537161 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" event={"ID":"6d07d001-6f91-4b09-9897-01f55286e015","Type":"ContainerDied","Data":"fbdfac53bdd245e32b87999b93c097e26d397cc4aeb41ff6c97ed044d826e3a6"} Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.537201 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbdfac53bdd245e32b87999b93c097e26d397cc4aeb41ff6c97ed044d826e3a6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.537224 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.650512 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6"] Nov 24 09:33:26 crc kubenswrapper[4719]: E1124 09:33:26.650931 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d07d001-6f91-4b09-9897-01f55286e015" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.650952 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d07d001-6f91-4b09-9897-01f55286e015" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.651237 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d07d001-6f91-4b09-9897-01f55286e015" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.653067 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.655026 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.655241 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.656270 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.657455 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.658649 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.673832 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6"] Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.794925 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.795023 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zc9\" (UniqueName: \"kubernetes.io/projected/9ebf3aed-eec5-4676-9f83-23ea070aa92e-kube-api-access-q8zc9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.795126 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.795150 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.897742 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.898262 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.898634 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8zc9\" (UniqueName: \"kubernetes.io/projected/9ebf3aed-eec5-4676-9f83-23ea070aa92e-kube-api-access-q8zc9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.898985 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.903914 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.908519 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.908864 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.934676 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8zc9\" (UniqueName: \"kubernetes.io/projected/9ebf3aed-eec5-4676-9f83-23ea070aa92e-kube-api-access-q8zc9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-82qn6\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:26 crc kubenswrapper[4719]: I1124 09:33:26.987530 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:33:27 crc kubenswrapper[4719]: I1124 09:33:27.494357 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6"] Nov 24 09:33:27 crc kubenswrapper[4719]: W1124 09:33:27.494434 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ebf3aed_eec5_4676_9f83_23ea070aa92e.slice/crio-37a3f82c1e7cf9fafda37376e17d42fb4af1bd01db858cc8e0d98e14e6ffebc0 WatchSource:0}: Error finding container 37a3f82c1e7cf9fafda37376e17d42fb4af1bd01db858cc8e0d98e14e6ffebc0: Status 404 returned error can't find the container with id 37a3f82c1e7cf9fafda37376e17d42fb4af1bd01db858cc8e0d98e14e6ffebc0 Nov 24 09:33:27 crc kubenswrapper[4719]: I1124 09:33:27.546713 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" event={"ID":"9ebf3aed-eec5-4676-9f83-23ea070aa92e","Type":"ContainerStarted","Data":"37a3f82c1e7cf9fafda37376e17d42fb4af1bd01db858cc8e0d98e14e6ffebc0"} Nov 24 09:33:28 crc kubenswrapper[4719]: I1124 09:33:28.555216 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" event={"ID":"9ebf3aed-eec5-4676-9f83-23ea070aa92e","Type":"ContainerStarted","Data":"fc0745aa104659823b25f60039235c035e3e35386033a8ec6e9212951814ccd1"} Nov 24 09:33:28 crc kubenswrapper[4719]: I1124 09:33:28.577015 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" podStartSLOduration=2.198986651 podStartE2EDuration="2.576990053s" podCreationTimestamp="2025-11-24 09:33:26 +0000 UTC" firstStartedPulling="2025-11-24 09:33:27.497907834 +0000 UTC m=+2383.829181076" lastFinishedPulling="2025-11-24 09:33:27.875911226 +0000 UTC m=+2384.207184478" observedRunningTime="2025-11-24 09:33:28.570083896 +0000 UTC m=+2384.901357148" watchObservedRunningTime="2025-11-24 09:33:28.576990053 +0000 UTC m=+2384.908263315" Nov 24 09:33:32 crc kubenswrapper[4719]: I1124 09:33:32.520772 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:33:32 crc kubenswrapper[4719]: E1124 09:33:32.521484 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:33:46 crc kubenswrapper[4719]: I1124 09:33:46.525011 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:33:46 crc kubenswrapper[4719]: E1124 09:33:46.527155 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:34:00 crc kubenswrapper[4719]: I1124 
09:34:00.520860 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:34:00 crc kubenswrapper[4719]: E1124 09:34:00.521779 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:34:11 crc kubenswrapper[4719]: I1124 09:34:11.521096 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:34:11 crc kubenswrapper[4719]: E1124 09:34:11.521929 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:34:17 crc kubenswrapper[4719]: I1124 09:34:17.128724 4719 generic.go:334] "Generic (PLEG): container finished" podID="9ebf3aed-eec5-4676-9f83-23ea070aa92e" containerID="fc0745aa104659823b25f60039235c035e3e35386033a8ec6e9212951814ccd1" exitCode=0 Nov 24 09:34:17 crc kubenswrapper[4719]: I1124 09:34:17.128830 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" event={"ID":"9ebf3aed-eec5-4676-9f83-23ea070aa92e","Type":"ContainerDied","Data":"fc0745aa104659823b25f60039235c035e3e35386033a8ec6e9212951814ccd1"} Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.552583 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.558537 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ssh-key\") pod \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.558608 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-inventory\") pod \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.559610 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8zc9\" (UniqueName: \"kubernetes.io/projected/9ebf3aed-eec5-4676-9f83-23ea070aa92e-kube-api-access-q8zc9\") pod \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.559665 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ceph\") pod \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\" (UID: \"9ebf3aed-eec5-4676-9f83-23ea070aa92e\") " Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.564493 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ebf3aed-eec5-4676-9f83-23ea070aa92e-kube-api-access-q8zc9" (OuterVolumeSpecName: "kube-api-access-q8zc9") pod "9ebf3aed-eec5-4676-9f83-23ea070aa92e" (UID: "9ebf3aed-eec5-4676-9f83-23ea070aa92e"). InnerVolumeSpecName "kube-api-access-q8zc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.583252 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ceph" (OuterVolumeSpecName: "ceph") pod "9ebf3aed-eec5-4676-9f83-23ea070aa92e" (UID: "9ebf3aed-eec5-4676-9f83-23ea070aa92e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.601936 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9ebf3aed-eec5-4676-9f83-23ea070aa92e" (UID: "9ebf3aed-eec5-4676-9f83-23ea070aa92e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.610838 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-inventory" (OuterVolumeSpecName: "inventory") pod "9ebf3aed-eec5-4676-9f83-23ea070aa92e" (UID: "9ebf3aed-eec5-4676-9f83-23ea070aa92e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.660692 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8zc9\" (UniqueName: \"kubernetes.io/projected/9ebf3aed-eec5-4676-9f83-23ea070aa92e-kube-api-access-q8zc9\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.660728 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.660743 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:18 crc kubenswrapper[4719]: I1124 09:34:18.660755 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ebf3aed-eec5-4676-9f83-23ea070aa92e-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.145710 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" event={"ID":"9ebf3aed-eec5-4676-9f83-23ea070aa92e","Type":"ContainerDied","Data":"37a3f82c1e7cf9fafda37376e17d42fb4af1bd01db858cc8e0d98e14e6ffebc0"} Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.145751 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37a3f82c1e7cf9fafda37376e17d42fb4af1bd01db858cc8e0d98e14e6ffebc0" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.145803 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-82qn6" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.253138 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5pzbb"] Nov 24 09:34:19 crc kubenswrapper[4719]: E1124 09:34:19.253488 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebf3aed-eec5-4676-9f83-23ea070aa92e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.253505 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebf3aed-eec5-4676-9f83-23ea070aa92e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.253680 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ebf3aed-eec5-4676-9f83-23ea070aa92e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.254265 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.258661 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.258657 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.258736 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.258862 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.259181 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.272422 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5pzbb"] Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.375253 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.375492 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5292\" (UniqueName: \"kubernetes.io/projected/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-kube-api-access-t5292\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.375673 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.375709 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ceph\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.477874 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.478001 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5292\" (UniqueName: \"kubernetes.io/projected/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-kube-api-access-t5292\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" 
(UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.478145 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.478185 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ceph\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.487916 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.488949 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.490114 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ceph\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.499791 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5292\" (UniqueName: \"kubernetes.io/projected/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-kube-api-access-t5292\") pod \"ssh-known-hosts-edpm-deployment-5pzbb\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:19 crc kubenswrapper[4719]: I1124 09:34:19.634270 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:20 crc kubenswrapper[4719]: I1124 09:34:20.170982 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5pzbb"] Nov 24 09:34:21 crc kubenswrapper[4719]: I1124 09:34:21.159941 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" event={"ID":"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372","Type":"ContainerStarted","Data":"d4d451cfa10c9d2feae77700f73bd49226ffaaa1ce728d06ac5af18b38fb0e08"} Nov 24 09:34:21 crc kubenswrapper[4719]: I1124 09:34:21.160233 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" event={"ID":"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372","Type":"ContainerStarted","Data":"7c2c971196b29aa28ab2253c93da178b8c827fb95026513b025ad106aca1e56e"} Nov 24 09:34:21 crc kubenswrapper[4719]: I1124 09:34:21.183396 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" podStartSLOduration=1.7444545900000001 podStartE2EDuration="2.18337632s" podCreationTimestamp="2025-11-24 09:34:19 +0000 UTC" firstStartedPulling="2025-11-24 09:34:20.184018664 +0000 UTC m=+2436.515291916" lastFinishedPulling="2025-11-24 09:34:20.622940394 +0000 UTC m=+2436.954213646" observedRunningTime="2025-11-24 09:34:21.179399146 +0000 UTC m=+2437.510672418" watchObservedRunningTime="2025-11-24 09:34:21.18337632 +0000 UTC m=+2437.514649572" Nov 24 09:34:22 crc kubenswrapper[4719]: I1124 09:34:22.521213 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:34:22 crc kubenswrapper[4719]: E1124 09:34:22.522012 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:34:30 crc kubenswrapper[4719]: I1124 09:34:30.235662 4719 generic.go:334] "Generic (PLEG): container finished" podID="d2a2f001-9ea9-45a6-a2c6-6beb9de6b372" containerID="d4d451cfa10c9d2feae77700f73bd49226ffaaa1ce728d06ac5af18b38fb0e08" exitCode=0 Nov 24 09:34:30 crc kubenswrapper[4719]: I1124 09:34:30.235766 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" event={"ID":"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372","Type":"ContainerDied","Data":"d4d451cfa10c9d2feae77700f73bd49226ffaaa1ce728d06ac5af18b38fb0e08"} Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.609693 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.702524 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ceph\") pod \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.702613 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ssh-key-openstack-edpm-ipam\") pod \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.702649 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5292\" (UniqueName: \"kubernetes.io/projected/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-kube-api-access-t5292\") pod \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.703560 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-inventory-0\") pod \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\" (UID: \"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372\") " Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.715272 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ceph" (OuterVolumeSpecName: "ceph") pod "d2a2f001-9ea9-45a6-a2c6-6beb9de6b372" (UID: "d2a2f001-9ea9-45a6-a2c6-6beb9de6b372"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.716028 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-kube-api-access-t5292" (OuterVolumeSpecName: "kube-api-access-t5292") pod "d2a2f001-9ea9-45a6-a2c6-6beb9de6b372" (UID: "d2a2f001-9ea9-45a6-a2c6-6beb9de6b372"). InnerVolumeSpecName "kube-api-access-t5292". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.729236 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d2a2f001-9ea9-45a6-a2c6-6beb9de6b372" (UID: "d2a2f001-9ea9-45a6-a2c6-6beb9de6b372"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.738211 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d2a2f001-9ea9-45a6-a2c6-6beb9de6b372" (UID: "d2a2f001-9ea9-45a6-a2c6-6beb9de6b372"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.807133 4719 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.807332 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.807390 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:31 crc kubenswrapper[4719]: I1124 09:34:31.807461 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5292\" (UniqueName: \"kubernetes.io/projected/d2a2f001-9ea9-45a6-a2c6-6beb9de6b372-kube-api-access-t5292\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.254068 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" event={"ID":"d2a2f001-9ea9-45a6-a2c6-6beb9de6b372","Type":"ContainerDied","Data":"7c2c971196b29aa28ab2253c93da178b8c827fb95026513b025ad106aca1e56e"} Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.254130 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c2c971196b29aa28ab2253c93da178b8c827fb95026513b025ad106aca1e56e" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.254134 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5pzbb" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.349566 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj"] Nov 24 09:34:32 crc kubenswrapper[4719]: E1124 09:34:32.350170 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a2f001-9ea9-45a6-a2c6-6beb9de6b372" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.350269 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a2f001-9ea9-45a6-a2c6-6beb9de6b372" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.350522 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a2f001-9ea9-45a6-a2c6-6beb9de6b372" containerName="ssh-known-hosts-edpm-deployment" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.351213 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.353715 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.354793 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.354921 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.355381 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.356557 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.366675 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj"] Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.522178 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.522300 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhtjd\" (UniqueName: \"kubernetes.io/projected/f686dd59-557a-4156-bf11-a0face9d15ea-kube-api-access-mhtjd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.522494 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.522571 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.624358 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.624503 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhtjd\" (UniqueName: 
\"kubernetes.io/projected/f686dd59-557a-4156-bf11-a0face9d15ea-kube-api-access-mhtjd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.624679 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.624738 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.628778 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.628821 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.629408 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.645638 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhtjd\" (UniqueName: \"kubernetes.io/projected/f686dd59-557a-4156-bf11-a0face9d15ea-kube-api-access-mhtjd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gj9mj\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:32 crc kubenswrapper[4719]: I1124 09:34:32.671097 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:33 crc kubenswrapper[4719]: I1124 09:34:33.207254 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj"] Nov 24 09:34:33 crc kubenswrapper[4719]: W1124 09:34:33.215336 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf686dd59_557a_4156_bf11_a0face9d15ea.slice/crio-16cf7f6d081d020655a979e99790cb325ca82f68f99d1e1bed68b44799d5f1ee WatchSource:0}: Error finding container 16cf7f6d081d020655a979e99790cb325ca82f68f99d1e1bed68b44799d5f1ee: Status 404 returned error can't find the container with id 16cf7f6d081d020655a979e99790cb325ca82f68f99d1e1bed68b44799d5f1ee Nov 24 09:34:33 crc kubenswrapper[4719]: I1124 09:34:33.263926 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" event={"ID":"f686dd59-557a-4156-bf11-a0face9d15ea","Type":"ContainerStarted","Data":"16cf7f6d081d020655a979e99790cb325ca82f68f99d1e1bed68b44799d5f1ee"} Nov 24 09:34:33 crc kubenswrapper[4719]: I1124 09:34:33.521238 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:34:33 crc kubenswrapper[4719]: E1124 09:34:33.521461 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:34:34 crc kubenswrapper[4719]: I1124 09:34:34.273771 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" event={"ID":"f686dd59-557a-4156-bf11-a0face9d15ea","Type":"ContainerStarted","Data":"43229d31f04cc166f190132413e762f13a57ddac0e8a1569ace73595db32dae7"} Nov 24 09:34:34 crc kubenswrapper[4719]: I1124 09:34:34.293281 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" podStartSLOduration=1.7408490730000001 podStartE2EDuration="2.29326284s" podCreationTimestamp="2025-11-24 09:34:32 +0000 UTC" firstStartedPulling="2025-11-24 09:34:33.21832403 +0000 UTC m=+2449.549597282" lastFinishedPulling="2025-11-24 09:34:33.770737797 +0000 UTC m=+2450.102011049" observedRunningTime="2025-11-24 09:34:34.288025471 +0000 UTC m=+2450.619298723" watchObservedRunningTime="2025-11-24 09:34:34.29326284 +0000 UTC m=+2450.624536082" Nov 24 09:34:42 crc kubenswrapper[4719]: I1124 09:34:42.340981 4719 generic.go:334] "Generic (PLEG): container finished" podID="f686dd59-557a-4156-bf11-a0face9d15ea" containerID="43229d31f04cc166f190132413e762f13a57ddac0e8a1569ace73595db32dae7" exitCode=0 Nov 24 09:34:42 crc kubenswrapper[4719]: I1124 09:34:42.341055 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" event={"ID":"f686dd59-557a-4156-bf11-a0face9d15ea","Type":"ContainerDied","Data":"43229d31f04cc166f190132413e762f13a57ddac0e8a1569ace73595db32dae7"} Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.784819 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.837265 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ceph\") pod \"f686dd59-557a-4156-bf11-a0face9d15ea\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.837353 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhtjd\" (UniqueName: \"kubernetes.io/projected/f686dd59-557a-4156-bf11-a0face9d15ea-kube-api-access-mhtjd\") pod \"f686dd59-557a-4156-bf11-a0face9d15ea\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.837460 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-inventory\") pod \"f686dd59-557a-4156-bf11-a0face9d15ea\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.837488 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ssh-key\") pod \"f686dd59-557a-4156-bf11-a0face9d15ea\" (UID: \"f686dd59-557a-4156-bf11-a0face9d15ea\") " Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.842535 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f686dd59-557a-4156-bf11-a0face9d15ea-kube-api-access-mhtjd" (OuterVolumeSpecName: "kube-api-access-mhtjd") pod "f686dd59-557a-4156-bf11-a0face9d15ea" (UID: "f686dd59-557a-4156-bf11-a0face9d15ea"). InnerVolumeSpecName "kube-api-access-mhtjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.847153 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ceph" (OuterVolumeSpecName: "ceph") pod "f686dd59-557a-4156-bf11-a0face9d15ea" (UID: "f686dd59-557a-4156-bf11-a0face9d15ea"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.862048 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-inventory" (OuterVolumeSpecName: "inventory") pod "f686dd59-557a-4156-bf11-a0face9d15ea" (UID: "f686dd59-557a-4156-bf11-a0face9d15ea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.864727 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f686dd59-557a-4156-bf11-a0face9d15ea" (UID: "f686dd59-557a-4156-bf11-a0face9d15ea"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.940187 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.940226 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhtjd\" (UniqueName: \"kubernetes.io/projected/f686dd59-557a-4156-bf11-a0face9d15ea-kube-api-access-mhtjd\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.940237 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:43 crc kubenswrapper[4719]: I1124 09:34:43.940246 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f686dd59-557a-4156-bf11-a0face9d15ea-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.359890 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" event={"ID":"f686dd59-557a-4156-bf11-a0face9d15ea","Type":"ContainerDied","Data":"16cf7f6d081d020655a979e99790cb325ca82f68f99d1e1bed68b44799d5f1ee"} Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.359933 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16cf7f6d081d020655a979e99790cb325ca82f68f99d1e1bed68b44799d5f1ee" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.359968 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gj9mj" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.444394 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784"] Nov 24 09:34:44 crc kubenswrapper[4719]: E1124 09:34:44.444939 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f686dd59-557a-4156-bf11-a0face9d15ea" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.444971 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="f686dd59-557a-4156-bf11-a0face9d15ea" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.445301 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="f686dd59-557a-4156-bf11-a0face9d15ea" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.446064 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.449265 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.449581 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.450001 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.454422 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.454583 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.459539 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784"] Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.551882 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.551951 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.552029 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.552197 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbkw\" (UniqueName: \"kubernetes.io/projected/6aca06db-5628-433e-a1f4-f603fa8ece51-kube-api-access-2hbkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.654114 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.654755 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.654947 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hbkw\" (UniqueName: \"kubernetes.io/projected/6aca06db-5628-433e-a1f4-f603fa8ece51-kube-api-access-2hbkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.655170 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.656896 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.659321 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.659565 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.670576 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.670948 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.671923 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.679722 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbkw\" (UniqueName: \"kubernetes.io/projected/6aca06db-5628-433e-a1f4-f603fa8ece51-kube-api-access-2hbkw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vb784\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.771515 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:34:44 crc kubenswrapper[4719]: I1124 09:34:44.780024 
4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:45 crc kubenswrapper[4719]: I1124 09:34:45.294259 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784"] Nov 24 09:34:45 crc kubenswrapper[4719]: I1124 09:34:45.369640 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" event={"ID":"6aca06db-5628-433e-a1f4-f603fa8ece51","Type":"ContainerStarted","Data":"813789fca5ef461767b91c2ef5f5cd2a25e57b3154f64abb5877665241981123"} Nov 24 09:34:45 crc kubenswrapper[4719]: I1124 09:34:45.763447 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:34:46 crc kubenswrapper[4719]: I1124 09:34:46.379500 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" event={"ID":"6aca06db-5628-433e-a1f4-f603fa8ece51","Type":"ContainerStarted","Data":"dcaf3fc97ec9e4271c0ff5b41a85e56cae06009344ef31e19411e879fa973b4f"} Nov 24 09:34:47 crc kubenswrapper[4719]: I1124 09:34:47.522108 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:34:47 crc kubenswrapper[4719]: E1124 09:34:47.522602 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:34:56 crc kubenswrapper[4719]: I1124 09:34:56.467784 4719 generic.go:334] "Generic (PLEG): container finished" podID="6aca06db-5628-433e-a1f4-f603fa8ece51" containerID="dcaf3fc97ec9e4271c0ff5b41a85e56cae06009344ef31e19411e879fa973b4f" exitCode=0 Nov 24 09:34:56 crc kubenswrapper[4719]: I1124 09:34:56.467885 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" event={"ID":"6aca06db-5628-433e-a1f4-f603fa8ece51","Type":"ContainerDied","Data":"dcaf3fc97ec9e4271c0ff5b41a85e56cae06009344ef31e19411e879fa973b4f"} Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.869890 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.933150 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-inventory\") pod \"6aca06db-5628-433e-a1f4-f603fa8ece51\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.933615 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ssh-key\") pod \"6aca06db-5628-433e-a1f4-f603fa8ece51\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.933829 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ceph\") pod \"6aca06db-5628-433e-a1f4-f603fa8ece51\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.934022 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hbkw\" (UniqueName: \"kubernetes.io/projected/6aca06db-5628-433e-a1f4-f603fa8ece51-kube-api-access-2hbkw\") pod \"6aca06db-5628-433e-a1f4-f603fa8ece51\" (UID: \"6aca06db-5628-433e-a1f4-f603fa8ece51\") " Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.940194 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ceph" (OuterVolumeSpecName: "ceph") pod "6aca06db-5628-433e-a1f4-f603fa8ece51" (UID: "6aca06db-5628-433e-a1f4-f603fa8ece51"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.940659 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aca06db-5628-433e-a1f4-f603fa8ece51-kube-api-access-2hbkw" (OuterVolumeSpecName: "kube-api-access-2hbkw") pod "6aca06db-5628-433e-a1f4-f603fa8ece51" (UID: "6aca06db-5628-433e-a1f4-f603fa8ece51"). InnerVolumeSpecName "kube-api-access-2hbkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.966288 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-inventory" (OuterVolumeSpecName: "inventory") pod "6aca06db-5628-433e-a1f4-f603fa8ece51" (UID: "6aca06db-5628-433e-a1f4-f603fa8ece51"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:57 crc kubenswrapper[4719]: I1124 09:34:57.978947 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6aca06db-5628-433e-a1f4-f603fa8ece51" (UID: "6aca06db-5628-433e-a1f4-f603fa8ece51"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.036300 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hbkw\" (UniqueName: \"kubernetes.io/projected/6aca06db-5628-433e-a1f4-f603fa8ece51-kube-api-access-2hbkw\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.036605 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.036617 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.036630 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6aca06db-5628-433e-a1f4-f603fa8ece51-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.485185 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" event={"ID":"6aca06db-5628-433e-a1f4-f603fa8ece51","Type":"ContainerDied","Data":"813789fca5ef461767b91c2ef5f5cd2a25e57b3154f64abb5877665241981123"} Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.485519 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="813789fca5ef461767b91c2ef5f5cd2a25e57b3154f64abb5877665241981123" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.485263 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vb784" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.522508 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:34:58 crc kubenswrapper[4719]: E1124 09:34:58.523186 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.588217 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86"] Nov 24 09:34:58 crc kubenswrapper[4719]: E1124 09:34:58.588699 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aca06db-5628-433e-a1f4-f603fa8ece51" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.588719 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aca06db-5628-433e-a1f4-f603fa8ece51" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.588973 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aca06db-5628-433e-a1f4-f603fa8ece51" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.589739 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.597793 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.598399 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.598546 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.598411 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.598726 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.599589 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.599787 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.599791 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.607755 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86"] Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754530 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxq5\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-kube-api-access-6sxq5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754580 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754630 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754729 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: 
\"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754761 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754784 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754828 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754851 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754891 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.754964 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.755005 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 
09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.755078 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.755110 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857130 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857223 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sxq5\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-kube-api-access-6sxq5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857249 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857290 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857319 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857337 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857357 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857373 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857393 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857415 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857477 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857495 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.857526 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 
09:34:58.861585 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.861719 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.862382 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.864912 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.865126 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.868221 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.868935 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.869407 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.870519 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.872234 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.873795 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.877117 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sxq5\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-kube-api-access-6sxq5\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.878621 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2rf86\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:58 crc kubenswrapper[4719]: I1124 09:34:58.909221 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:34:59 crc kubenswrapper[4719]: I1124 09:34:59.473941 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86"] Nov 24 09:34:59 crc kubenswrapper[4719]: I1124 09:34:59.495118 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" event={"ID":"b1eec709-2c88-4a47-bc8b-51f49cc99053","Type":"ContainerStarted","Data":"287c8d081592f1277a6bbfd503431567d2ba1fec6fd8db456874678d897856c9"} Nov 24 09:35:00 crc kubenswrapper[4719]: I1124 09:35:00.505639 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" event={"ID":"b1eec709-2c88-4a47-bc8b-51f49cc99053","Type":"ContainerStarted","Data":"1ae1bafd2015020636759ccfc8753f3f73095a83df2d41dfed49a7187958716e"} Nov 24 09:35:00 crc kubenswrapper[4719]: I1124 09:35:00.538293 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" podStartSLOduration=1.916127336 podStartE2EDuration="2.538268717s" podCreationTimestamp="2025-11-24 09:34:58 +0000 UTC" firstStartedPulling="2025-11-24 09:34:59.481390599 +0000 UTC m=+2475.812663851" lastFinishedPulling="2025-11-24 09:35:00.10353197 +0000 UTC m=+2476.434805232" observedRunningTime="2025-11-24 09:35:00.532012622 +0000 UTC m=+2476.863285884" watchObservedRunningTime="2025-11-24 09:35:00.538268717 +0000 UTC m=+2476.869541999" Nov 24 09:35:12 crc kubenswrapper[4719]: I1124 09:35:12.521138 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:35:12 crc kubenswrapper[4719]: E1124 09:35:12.522077 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:35:26 crc kubenswrapper[4719]: I1124 09:35:26.521539 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:35:26 crc kubenswrapper[4719]: E1124 09:35:26.522349 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:35:33 crc kubenswrapper[4719]: I1124 09:35:33.779216 4719 generic.go:334] "Generic (PLEG): container finished" podID="b1eec709-2c88-4a47-bc8b-51f49cc99053" containerID="1ae1bafd2015020636759ccfc8753f3f73095a83df2d41dfed49a7187958716e" exitCode=0 Nov 24 09:35:33 crc kubenswrapper[4719]: I1124 09:35:33.779281 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" 
event={"ID":"b1eec709-2c88-4a47-bc8b-51f49cc99053","Type":"ContainerDied","Data":"1ae1bafd2015020636759ccfc8753f3f73095a83df2d41dfed49a7187958716e"} Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.195453 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.252440 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-nova-combined-ca-bundle\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.252577 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ceph\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.252610 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sxq5\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-kube-api-access-6sxq5\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.252705 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-repo-setup-combined-ca-bundle\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.252746 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-libvirt-combined-ca-bundle\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.252793 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-inventory\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.252853 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ssh-key\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.254207 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-ovn-default-certs-0\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.254309 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-bootstrap-combined-ca-bundle\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.254390 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.254491 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-neutron-metadata-combined-ca-bundle\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.254590 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ovn-combined-ca-bundle\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.254703 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b1eec709-2c88-4a47-bc8b-51f49cc99053\" (UID: \"b1eec709-2c88-4a47-bc8b-51f49cc99053\") " Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.258146 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.258586 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ceph" (OuterVolumeSpecName: "ceph") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.258715 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.259867 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.261175 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.262141 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.262256 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.263015 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.264294 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-kube-api-access-6sxq5" (OuterVolumeSpecName: "kube-api-access-6sxq5") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "kube-api-access-6sxq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.265420 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.268359 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.288419 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-inventory" (OuterVolumeSpecName: "inventory") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.288931 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b1eec709-2c88-4a47-bc8b-51f49cc99053" (UID: "b1eec709-2c88-4a47-bc8b-51f49cc99053"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.357741 4719 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.357822 4719 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.357843 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.357859 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sxq5\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-kube-api-access-6sxq5\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.357910 4719 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.357928 4719 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.357945 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.357988 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.358008 4719 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.358026 4719 reconciler_common.go:293] "Volume detached for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.358090 4719 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b1eec709-2c88-4a47-bc8b-51f49cc99053-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.358109 4719 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.358160 4719 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eec709-2c88-4a47-bc8b-51f49cc99053-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.796920 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" event={"ID":"b1eec709-2c88-4a47-bc8b-51f49cc99053","Type":"ContainerDied","Data":"287c8d081592f1277a6bbfd503431567d2ba1fec6fd8db456874678d897856c9"} Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.796973 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287c8d081592f1277a6bbfd503431567d2ba1fec6fd8db456874678d897856c9" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.797024 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2rf86" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.899373 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr"] Nov 24 09:35:35 crc kubenswrapper[4719]: E1124 09:35:35.900083 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1eec709-2c88-4a47-bc8b-51f49cc99053" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.900101 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1eec709-2c88-4a47-bc8b-51f49cc99053" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.900293 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1eec709-2c88-4a47-bc8b-51f49cc99053" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.900880 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.903592 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.903886 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.904171 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.905742 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.905839 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.936361 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr"] Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.968163 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.968234 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.968449 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:35 crc kubenswrapper[4719]: I1124 09:35:35.968574 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfwh\" (UniqueName: \"kubernetes.io/projected/1dad4f07-729f-4a99-bc32-62f666007c12-kube-api-access-rjfwh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.070962 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.071051 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.071086 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.071128 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfwh\" (UniqueName: \"kubernetes.io/projected/1dad4f07-729f-4a99-bc32-62f666007c12-kube-api-access-rjfwh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.075882 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.076251 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.076776 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.088707 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfwh\" (UniqueName: \"kubernetes.io/projected/1dad4f07-729f-4a99-bc32-62f666007c12-kube-api-access-rjfwh\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.235800 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.729980 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr"] Nov 24 09:35:36 crc kubenswrapper[4719]: I1124 09:35:36.806410 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" event={"ID":"1dad4f07-729f-4a99-bc32-62f666007c12","Type":"ContainerStarted","Data":"90209ee28172b4043d7e4f492e955efc5cd49e04600ebeecabfd27af7e61ff76"} Nov 24 09:35:37 crc kubenswrapper[4719]: I1124 09:35:37.814900 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" event={"ID":"1dad4f07-729f-4a99-bc32-62f666007c12","Type":"ContainerStarted","Data":"a34419b180d0a6ee6295e6d5b478a09eabde9d3d22729efe15904ef07938cac2"} Nov 24 09:35:37 crc kubenswrapper[4719]: I1124 09:35:37.837774 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" podStartSLOduration=2.047007161 podStartE2EDuration="2.837753469s" podCreationTimestamp="2025-11-24 09:35:35 +0000 UTC" firstStartedPulling="2025-11-24 09:35:36.747433623 +0000 UTC m=+2513.078706885" lastFinishedPulling="2025-11-24 09:35:37.538179941 +0000 UTC m=+2513.869453193" observedRunningTime="2025-11-24 09:35:37.830917468 +0000 UTC m=+2514.162190730" watchObservedRunningTime="2025-11-24 09:35:37.837753469 +0000 UTC m=+2514.169026741" Nov 24 09:35:38 crc kubenswrapper[4719]: I1124 09:35:38.522238 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:35:38 crc kubenswrapper[4719]: E1124 09:35:38.523006 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:35:43 crc kubenswrapper[4719]: I1124 09:35:43.866589 4719 generic.go:334] "Generic (PLEG): container finished" podID="1dad4f07-729f-4a99-bc32-62f666007c12" containerID="a34419b180d0a6ee6295e6d5b478a09eabde9d3d22729efe15904ef07938cac2" exitCode=0 Nov 24 09:35:43 crc kubenswrapper[4719]: I1124 09:35:43.866681 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" event={"ID":"1dad4f07-729f-4a99-bc32-62f666007c12","Type":"ContainerDied","Data":"a34419b180d0a6ee6295e6d5b478a09eabde9d3d22729efe15904ef07938cac2"} Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.273365 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.350657 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-inventory\") pod \"1dad4f07-729f-4a99-bc32-62f666007c12\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.350710 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ssh-key\") pod \"1dad4f07-729f-4a99-bc32-62f666007c12\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.350761 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjfwh\" (UniqueName: \"kubernetes.io/projected/1dad4f07-729f-4a99-bc32-62f666007c12-kube-api-access-rjfwh\") pod \"1dad4f07-729f-4a99-bc32-62f666007c12\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.350857 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ceph\") pod \"1dad4f07-729f-4a99-bc32-62f666007c12\" (UID: \"1dad4f07-729f-4a99-bc32-62f666007c12\") " Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.357157 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ceph" (OuterVolumeSpecName: "ceph") pod "1dad4f07-729f-4a99-bc32-62f666007c12" (UID: "1dad4f07-729f-4a99-bc32-62f666007c12"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.359473 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dad4f07-729f-4a99-bc32-62f666007c12-kube-api-access-rjfwh" (OuterVolumeSpecName: "kube-api-access-rjfwh") pod "1dad4f07-729f-4a99-bc32-62f666007c12" (UID: "1dad4f07-729f-4a99-bc32-62f666007c12"). InnerVolumeSpecName "kube-api-access-rjfwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.379873 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1dad4f07-729f-4a99-bc32-62f666007c12" (UID: "1dad4f07-729f-4a99-bc32-62f666007c12"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.385185 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-inventory" (OuterVolumeSpecName: "inventory") pod "1dad4f07-729f-4a99-bc32-62f666007c12" (UID: "1dad4f07-729f-4a99-bc32-62f666007c12"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.452893 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.452949 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.452962 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjfwh\" (UniqueName: \"kubernetes.io/projected/1dad4f07-729f-4a99-bc32-62f666007c12-kube-api-access-rjfwh\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.452974 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1dad4f07-729f-4a99-bc32-62f666007c12-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.885510 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" event={"ID":"1dad4f07-729f-4a99-bc32-62f666007c12","Type":"ContainerDied","Data":"90209ee28172b4043d7e4f492e955efc5cd49e04600ebeecabfd27af7e61ff76"} Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.885544 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90209ee28172b4043d7e4f492e955efc5cd49e04600ebeecabfd27af7e61ff76" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.885580 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.975308 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84"] Nov 24 09:35:45 crc kubenswrapper[4719]: E1124 09:35:45.975671 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dad4f07-729f-4a99-bc32-62f666007c12" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.975686 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dad4f07-729f-4a99-bc32-62f666007c12" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.975865 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dad4f07-729f-4a99-bc32-62f666007c12" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.976523 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.980076 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.980113 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.980123 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.980237 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.980389 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.980570 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:35:45 crc kubenswrapper[4719]: I1124 09:35:45.986925 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84"] Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.061829 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.062132 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.062249 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zlj\" (UniqueName: \"kubernetes.io/projected/76df25ad-66c3-42d0-8539-b083731a87be-kube-api-access-89zlj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.062336 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76df25ad-66c3-42d0-8539-b083731a87be-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.062411 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc 
kubenswrapper[4719]: I1124 09:35:46.062498 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.164366 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.164561 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.164788 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89zlj\" (UniqueName: \"kubernetes.io/projected/76df25ad-66c3-42d0-8539-b083731a87be-kube-api-access-89zlj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.165733 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76df25ad-66c3-42d0-8539-b083731a87be-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.165835 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.165894 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.166737 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76df25ad-66c3-42d0-8539-b083731a87be-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.169808 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.173847 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.174537 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.177919 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.193929 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zlj\" (UniqueName: \"kubernetes.io/projected/76df25ad-66c3-42d0-8539-b083731a87be-kube-api-access-89zlj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6kl84\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.304115 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.831883 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84"] Nov 24 09:35:46 crc kubenswrapper[4719]: I1124 09:35:46.905092 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" event={"ID":"76df25ad-66c3-42d0-8539-b083731a87be","Type":"ContainerStarted","Data":"5321788f838812ec5182834a23d79dfae3c4ed44ec5f5557438dfbfd7adee13f"} Nov 24 09:35:47 crc kubenswrapper[4719]: I1124 09:35:47.915141 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" event={"ID":"76df25ad-66c3-42d0-8539-b083731a87be","Type":"ContainerStarted","Data":"427ccf53a920b82cb3c1c7177243f73c1499964d5c0ca5df6f513df74cb1b44c"} Nov 24 09:35:47 crc kubenswrapper[4719]: I1124 09:35:47.932835 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" podStartSLOduration=2.395292791 podStartE2EDuration="2.93281697s" podCreationTimestamp="2025-11-24 09:35:45 +0000 UTC" firstStartedPulling="2025-11-24 09:35:46.844197692 +0000 UTC m=+2523.175470944" lastFinishedPulling="2025-11-24 09:35:47.381721851 +0000 UTC m=+2523.712995123" observedRunningTime="2025-11-24 09:35:47.931773951 +0000 UTC m=+2524.263047223" watchObservedRunningTime="2025-11-24 09:35:47.93281697 +0000 UTC m=+2524.264090242" Nov 24 09:35:52 crc kubenswrapper[4719]: I1124 09:35:52.520597 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:35:52 crc kubenswrapper[4719]: E1124 09:35:52.521582 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:36:06 crc kubenswrapper[4719]: I1124 09:36:06.520768 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:36:06 crc kubenswrapper[4719]: E1124 09:36:06.521691 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:36:21 crc kubenswrapper[4719]: I1124 09:36:21.520614 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:36:21 crc kubenswrapper[4719]: E1124 09:36:21.521446 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" 
podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:36:32 crc kubenswrapper[4719]: I1124 09:36:32.520547 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:36:32 crc kubenswrapper[4719]: E1124 09:36:32.521374 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:36:45 crc kubenswrapper[4719]: I1124 09:36:45.521127 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:36:45 crc kubenswrapper[4719]: E1124 09:36:45.521779 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:36:59 crc kubenswrapper[4719]: I1124 09:36:59.520255 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:36:59 crc kubenswrapper[4719]: E1124 09:36:59.520868 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.218274 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82pbh"] Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.222398 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.249811 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82pbh"] Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.387493 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-utilities\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.387709 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-catalog-content\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.387879 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bhj\" (UniqueName: \"kubernetes.io/projected/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-kube-api-access-c5bhj\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.489503 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-utilities\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.489656 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-catalog-content\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.489725 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bhj\" (UniqueName: \"kubernetes.io/projected/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-kube-api-access-c5bhj\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.490117 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-utilities\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.490185 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-catalog-content\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.508015 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c5bhj\" (UniqueName: \"kubernetes.io/projected/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-kube-api-access-c5bhj\") pod \"redhat-operators-82pbh\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:08 crc kubenswrapper[4719]: I1124 09:37:08.553480 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:09 crc kubenswrapper[4719]: I1124 09:37:09.077879 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82pbh"] Nov 24 09:37:09 crc kubenswrapper[4719]: I1124 09:37:09.576959 4719 generic.go:334] "Generic (PLEG): container finished" podID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerID="2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480" exitCode=0 Nov 24 09:37:09 crc kubenswrapper[4719]: I1124 09:37:09.577218 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82pbh" event={"ID":"b4bc3db9-5d2d-4511-a303-8c839a6e99ef","Type":"ContainerDied","Data":"2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480"} Nov 24 09:37:09 crc kubenswrapper[4719]: I1124 09:37:09.577243 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82pbh" event={"ID":"b4bc3db9-5d2d-4511-a303-8c839a6e99ef","Type":"ContainerStarted","Data":"5adb90696758c5e4f80b2fc008cc6ff98481efdaa46c4588aeb4828d66050c14"} Nov 24 09:37:10 crc kubenswrapper[4719]: I1124 09:37:10.587259 4719 generic.go:334] "Generic (PLEG): container finished" podID="76df25ad-66c3-42d0-8539-b083731a87be" containerID="427ccf53a920b82cb3c1c7177243f73c1499964d5c0ca5df6f513df74cb1b44c" exitCode=0 Nov 24 09:37:10 crc kubenswrapper[4719]: I1124 09:37:10.587353 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" event={"ID":"76df25ad-66c3-42d0-8539-b083731a87be","Type":"ContainerDied","Data":"427ccf53a920b82cb3c1c7177243f73c1499964d5c0ca5df6f513df74cb1b44c"} Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.001536 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.154698 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89zlj\" (UniqueName: \"kubernetes.io/projected/76df25ad-66c3-42d0-8539-b083731a87be-kube-api-access-89zlj\") pod \"76df25ad-66c3-42d0-8539-b083731a87be\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.154772 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-inventory\") pod \"76df25ad-66c3-42d0-8539-b083731a87be\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.154814 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ceph\") pod \"76df25ad-66c3-42d0-8539-b083731a87be\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.154849 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76df25ad-66c3-42d0-8539-b083731a87be-ovncontroller-config-0\") pod \"76df25ad-66c3-42d0-8539-b083731a87be\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.154872 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ovn-combined-ca-bundle\") pod \"76df25ad-66c3-42d0-8539-b083731a87be\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.154984 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ssh-key\") pod \"76df25ad-66c3-42d0-8539-b083731a87be\" (UID: \"76df25ad-66c3-42d0-8539-b083731a87be\") " Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.160149 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76df25ad-66c3-42d0-8539-b083731a87be-kube-api-access-89zlj" (OuterVolumeSpecName: "kube-api-access-89zlj") pod "76df25ad-66c3-42d0-8539-b083731a87be" (UID: "76df25ad-66c3-42d0-8539-b083731a87be"). InnerVolumeSpecName "kube-api-access-89zlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.160270 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ceph" (OuterVolumeSpecName: "ceph") pod "76df25ad-66c3-42d0-8539-b083731a87be" (UID: "76df25ad-66c3-42d0-8539-b083731a87be"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.174017 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "76df25ad-66c3-42d0-8539-b083731a87be" (UID: "76df25ad-66c3-42d0-8539-b083731a87be"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.184087 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "76df25ad-66c3-42d0-8539-b083731a87be" (UID: "76df25ad-66c3-42d0-8539-b083731a87be"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.192426 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-inventory" (OuterVolumeSpecName: "inventory") pod "76df25ad-66c3-42d0-8539-b083731a87be" (UID: "76df25ad-66c3-42d0-8539-b083731a87be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.192773 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76df25ad-66c3-42d0-8539-b083731a87be-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "76df25ad-66c3-42d0-8539-b083731a87be" (UID: "76df25ad-66c3-42d0-8539-b083731a87be"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.257361 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89zlj\" (UniqueName: \"kubernetes.io/projected/76df25ad-66c3-42d0-8539-b083731a87be-kube-api-access-89zlj\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.257406 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.257418 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.257430 4719 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76df25ad-66c3-42d0-8539-b083731a87be-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.257444 4719 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.257452 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76df25ad-66c3-42d0-8539-b083731a87be-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.604671 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" event={"ID":"76df25ad-66c3-42d0-8539-b083731a87be","Type":"ContainerDied","Data":"5321788f838812ec5182834a23d79dfae3c4ed44ec5f5557438dfbfd7adee13f"} Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.604707 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5321788f838812ec5182834a23d79dfae3c4ed44ec5f5557438dfbfd7adee13f" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 
09:37:12.604779 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6kl84" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.718323 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm"] Nov 24 09:37:12 crc kubenswrapper[4719]: E1124 09:37:12.718999 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76df25ad-66c3-42d0-8539-b083731a87be" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.719022 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="76df25ad-66c3-42d0-8539-b083731a87be" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.719255 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="76df25ad-66c3-42d0-8539-b083731a87be" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.719909 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.727530 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.727674 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.727773 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.727893 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.728128 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.728250 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.728392 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.735217 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm"] Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.870088 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqws4\" (UniqueName: \"kubernetes.io/projected/4e1b3223-80c0-40c5-9f45-833af2ab03be-kube-api-access-fqws4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.870146 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: 
\"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.870215 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.870260 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.870277 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.870307 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.870340 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.972215 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqws4\" (UniqueName: \"kubernetes.io/projected/4e1b3223-80c0-40c5-9f45-833af2ab03be-kube-api-access-fqws4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.972274 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.972364 4719 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.972522 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.972546 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.972594 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.972639 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.985515 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.988831 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 09:37:12.992554 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:12 crc kubenswrapper[4719]: I1124 
09:37:12.997520 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:13 crc kubenswrapper[4719]: I1124 09:37:13.005025 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:13 crc kubenswrapper[4719]: I1124 09:37:13.013257 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:13 crc kubenswrapper[4719]: I1124 09:37:13.025755 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqws4\" (UniqueName: \"kubernetes.io/projected/4e1b3223-80c0-40c5-9f45-833af2ab03be-kube-api-access-fqws4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:13 crc kubenswrapper[4719]: I1124 09:37:13.045663 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:37:13 crc kubenswrapper[4719]: I1124 09:37:13.520454 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:37:13 crc kubenswrapper[4719]: I1124 09:37:13.612370 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm"] Nov 24 09:37:13 crc kubenswrapper[4719]: W1124 09:37:13.620118 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1b3223_80c0_40c5_9f45_833af2ab03be.slice/crio-282faab0568a7548ee3ff536ad0623fb7b9b44d36881d2c3c9efcbfd9c9ffa65 WatchSource:0}: Error finding container 282faab0568a7548ee3ff536ad0623fb7b9b44d36881d2c3c9efcbfd9c9ffa65: Status 404 returned error can't find the container with id 282faab0568a7548ee3ff536ad0623fb7b9b44d36881d2c3c9efcbfd9c9ffa65 Nov 24 09:37:14 crc kubenswrapper[4719]: I1124 09:37:14.633255 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" event={"ID":"4e1b3223-80c0-40c5-9f45-833af2ab03be","Type":"ContainerStarted","Data":"1504ce539276caf970f5480f74c938c20dd82f3dd9d15f66db2649340b1657fb"} Nov 24 09:37:14 crc kubenswrapper[4719]: I1124 09:37:14.634568 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" event={"ID":"4e1b3223-80c0-40c5-9f45-833af2ab03be","Type":"ContainerStarted","Data":"282faab0568a7548ee3ff536ad0623fb7b9b44d36881d2c3c9efcbfd9c9ffa65"} Nov 24 09:37:14 crc kubenswrapper[4719]: I1124 09:37:14.635822 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82pbh" event={"ID":"b4bc3db9-5d2d-4511-a303-8c839a6e99ef","Type":"ContainerStarted","Data":"bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c"} Nov 24 09:37:14 crc kubenswrapper[4719]: I1124 09:37:14.639134 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"53d911277b59ae9e8329c3f97db6085cdd29d210d4bbd435a73653b6b25bf62a"} Nov 24 09:37:14 crc kubenswrapper[4719]: I1124 09:37:14.650534 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" podStartSLOduration=2.085967877 podStartE2EDuration="2.650519953s" podCreationTimestamp="2025-11-24 09:37:12 +0000 UTC" firstStartedPulling="2025-11-24 09:37:13.623116141 +0000 UTC m=+2609.954389393" lastFinishedPulling="2025-11-24 09:37:14.187668207 +0000 UTC m=+2610.518941469" observedRunningTime="2025-11-24 09:37:14.647579941 +0000 UTC m=+2610.978853213" watchObservedRunningTime="2025-11-24 09:37:14.650519953 +0000 UTC m=+2610.981793205" Nov 24 09:37:19 crc kubenswrapper[4719]: I1124 09:37:19.706828 4719 generic.go:334] "Generic (PLEG): container finished" podID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerID="bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c" exitCode=0 Nov 24 09:37:19 crc kubenswrapper[4719]: I1124 09:37:19.706906 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82pbh" 
event={"ID":"b4bc3db9-5d2d-4511-a303-8c839a6e99ef","Type":"ContainerDied","Data":"bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c"} Nov 24 09:37:25 crc kubenswrapper[4719]: I1124 09:37:25.761792 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82pbh" event={"ID":"b4bc3db9-5d2d-4511-a303-8c839a6e99ef","Type":"ContainerStarted","Data":"924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a"} Nov 24 09:37:25 crc kubenswrapper[4719]: I1124 09:37:25.782888 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82pbh" podStartSLOduration=2.029934511 podStartE2EDuration="17.782866073s" podCreationTimestamp="2025-11-24 09:37:08 +0000 UTC" firstStartedPulling="2025-11-24 09:37:09.578514006 +0000 UTC m=+2605.909787258" lastFinishedPulling="2025-11-24 09:37:25.331445558 +0000 UTC m=+2621.662718820" observedRunningTime="2025-11-24 09:37:25.779890179 +0000 UTC m=+2622.111163451" watchObservedRunningTime="2025-11-24 09:37:25.782866073 +0000 UTC m=+2622.114139325" Nov 24 09:37:28 crc kubenswrapper[4719]: I1124 09:37:28.553921 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:28 crc kubenswrapper[4719]: I1124 09:37:28.555427 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:37:29 crc kubenswrapper[4719]: I1124 09:37:29.605316 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82pbh" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="registry-server" probeResult="failure" output=< Nov 24 09:37:29 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:37:29 crc kubenswrapper[4719]: > Nov 24 09:37:39 crc kubenswrapper[4719]: I1124 09:37:39.598989 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82pbh" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="registry-server" probeResult="failure" output=< Nov 24 09:37:39 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:37:39 crc kubenswrapper[4719]: > Nov 24 09:37:49 crc kubenswrapper[4719]: I1124 09:37:49.607345 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82pbh" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="registry-server" probeResult="failure" output=< Nov 24 09:37:49 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:37:49 crc kubenswrapper[4719]: > Nov 24 09:37:59 crc kubenswrapper[4719]: I1124 09:37:59.619983 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82pbh" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="registry-server" probeResult="failure" output=< Nov 24 09:37:59 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:37:59 crc kubenswrapper[4719]: > Nov 24 09:38:08 crc kubenswrapper[4719]: I1124 09:38:08.600141 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:38:08 crc kubenswrapper[4719]: I1124 09:38:08.646990 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:38:09 crc 
kubenswrapper[4719]: I1124 09:38:09.443876 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82pbh"] Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.151179 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82pbh" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="registry-server" containerID="cri-o://924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a" gracePeriod=2 Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.577158 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.662085 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-catalog-content\") pod \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.662170 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-utilities\") pod \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.662199 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5bhj\" (UniqueName: \"kubernetes.io/projected/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-kube-api-access-c5bhj\") pod \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\" (UID: \"b4bc3db9-5d2d-4511-a303-8c839a6e99ef\") " Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.663639 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-utilities" (OuterVolumeSpecName: "utilities") pod "b4bc3db9-5d2d-4511-a303-8c839a6e99ef" (UID: "b4bc3db9-5d2d-4511-a303-8c839a6e99ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.667293 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-kube-api-access-c5bhj" (OuterVolumeSpecName: "kube-api-access-c5bhj") pod "b4bc3db9-5d2d-4511-a303-8c839a6e99ef" (UID: "b4bc3db9-5d2d-4511-a303-8c839a6e99ef"). InnerVolumeSpecName "kube-api-access-c5bhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.760096 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4bc3db9-5d2d-4511-a303-8c839a6e99ef" (UID: "b4bc3db9-5d2d-4511-a303-8c839a6e99ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.767415 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.767473 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:10 crc kubenswrapper[4719]: I1124 09:38:10.767493 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5bhj\" (UniqueName: \"kubernetes.io/projected/b4bc3db9-5d2d-4511-a303-8c839a6e99ef-kube-api-access-c5bhj\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.160117 4719 generic.go:334] "Generic (PLEG): container finished" podID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerID="924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a" exitCode=0 Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.160224 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82pbh" event={"ID":"b4bc3db9-5d2d-4511-a303-8c839a6e99ef","Type":"ContainerDied","Data":"924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a"} Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.160300 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82pbh" event={"ID":"b4bc3db9-5d2d-4511-a303-8c839a6e99ef","Type":"ContainerDied","Data":"5adb90696758c5e4f80b2fc008cc6ff98481efdaa46c4588aeb4828d66050c14"} Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.160331 4719 scope.go:117] "RemoveContainer" containerID="924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.160237 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82pbh" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.208305 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82pbh"] Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.209253 4719 scope.go:117] "RemoveContainer" containerID="bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.218234 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82pbh"] Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.232281 4719 scope.go:117] "RemoveContainer" containerID="2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.261099 4719 scope.go:117] "RemoveContainer" containerID="924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a" Nov 24 09:38:11 crc kubenswrapper[4719]: E1124 09:38:11.261677 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a\": container with ID starting with 924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a not found: ID does not exist" containerID="924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.261726 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a"} err="failed to get container status \"924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a\": rpc error: code = NotFound desc = could not find container \"924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a\": container with ID starting with 924117822f8fb66d17a564f7713f792c53a33f6d3786927a68636fee37e2b68a not found: ID does not exist" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.261758 4719 scope.go:117] "RemoveContainer" containerID="bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c" Nov 24 09:38:11 crc kubenswrapper[4719]: E1124 09:38:11.262220 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c\": container with ID starting with bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c not found: ID does not exist" containerID="bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.262258 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c"} err="failed to get container status \"bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c\": rpc error: code = NotFound desc = could not find container \"bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c\": container with ID starting with bb05bd2ab84e5c38181a24cc743e5bd140166965a13d5c3c8ac6c16348fc0d3c not found: ID does not exist" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.262285 4719 scope.go:117] "RemoveContainer" containerID="2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480" Nov 24 09:38:11 crc kubenswrapper[4719]: E1124 09:38:11.262527 4719 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480\": container with ID starting with 2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480 not found: ID does not exist" containerID="2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480" Nov 24 09:38:11 crc kubenswrapper[4719]: I1124 09:38:11.262554 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480"} err="failed to get container status \"2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480\": rpc error: code = NotFound desc = could not find container \"2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480\": container with ID starting with 2705278ed430104734292d67637964e8cc8867a7352f2de308a96956d1baa480 not found: ID does not exist" Nov 24 09:38:12 crc kubenswrapper[4719]: I1124 09:38:12.535607 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" path="/var/lib/kubelet/pods/b4bc3db9-5d2d-4511-a303-8c839a6e99ef/volumes" Nov 24 09:38:20 crc kubenswrapper[4719]: I1124 09:38:20.234608 4719 generic.go:334] "Generic (PLEG): container finished" podID="4e1b3223-80c0-40c5-9f45-833af2ab03be" containerID="1504ce539276caf970f5480f74c938c20dd82f3dd9d15f66db2649340b1657fb" exitCode=0 Nov 24 09:38:20 crc kubenswrapper[4719]: I1124 09:38:20.234651 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" event={"ID":"4e1b3223-80c0-40c5-9f45-833af2ab03be","Type":"ContainerDied","Data":"1504ce539276caf970f5480f74c938c20dd82f3dd9d15f66db2649340b1657fb"} Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.638904 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.684401 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-metadata-combined-ca-bundle\") pod \"4e1b3223-80c0-40c5-9f45-833af2ab03be\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.684509 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ceph\") pod \"4e1b3223-80c0-40c5-9f45-833af2ab03be\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.684533 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ssh-key\") pod \"4e1b3223-80c0-40c5-9f45-833af2ab03be\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.684564 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4e1b3223-80c0-40c5-9f45-833af2ab03be\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.684593 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-nova-metadata-neutron-config-0\") pod \"4e1b3223-80c0-40c5-9f45-833af2ab03be\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.684661 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqws4\" (UniqueName: \"kubernetes.io/projected/4e1b3223-80c0-40c5-9f45-833af2ab03be-kube-api-access-fqws4\") pod \"4e1b3223-80c0-40c5-9f45-833af2ab03be\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.684735 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-inventory\") pod \"4e1b3223-80c0-40c5-9f45-833af2ab03be\" (UID: \"4e1b3223-80c0-40c5-9f45-833af2ab03be\") " Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.705451 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4e1b3223-80c0-40c5-9f45-833af2ab03be" (UID: "4e1b3223-80c0-40c5-9f45-833af2ab03be"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.707978 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ceph" (OuterVolumeSpecName: "ceph") pod "4e1b3223-80c0-40c5-9f45-833af2ab03be" (UID: "4e1b3223-80c0-40c5-9f45-833af2ab03be"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.727248 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1b3223-80c0-40c5-9f45-833af2ab03be-kube-api-access-fqws4" (OuterVolumeSpecName: "kube-api-access-fqws4") pod "4e1b3223-80c0-40c5-9f45-833af2ab03be" (UID: "4e1b3223-80c0-40c5-9f45-833af2ab03be"). InnerVolumeSpecName "kube-api-access-fqws4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.772920 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-inventory" (OuterVolumeSpecName: "inventory") pod "4e1b3223-80c0-40c5-9f45-833af2ab03be" (UID: "4e1b3223-80c0-40c5-9f45-833af2ab03be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.774347 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4e1b3223-80c0-40c5-9f45-833af2ab03be" (UID: "4e1b3223-80c0-40c5-9f45-833af2ab03be"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.803671 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.804268 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.804377 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqws4\" (UniqueName: \"kubernetes.io/projected/4e1b3223-80c0-40c5-9f45-833af2ab03be-kube-api-access-fqws4\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.804499 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.804578 4719 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.807205 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4e1b3223-80c0-40c5-9f45-833af2ab03be" (UID: "4e1b3223-80c0-40c5-9f45-833af2ab03be"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.813374 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4e1b3223-80c0-40c5-9f45-833af2ab03be" (UID: "4e1b3223-80c0-40c5-9f45-833af2ab03be"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.906307 4719 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:21 crc kubenswrapper[4719]: I1124 09:38:21.906357 4719 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4e1b3223-80c0-40c5-9f45-833af2ab03be-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.249984 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" event={"ID":"4e1b3223-80c0-40c5-9f45-833af2ab03be","Type":"ContainerDied","Data":"282faab0568a7548ee3ff536ad0623fb7b9b44d36881d2c3c9efcbfd9c9ffa65"} Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.250018 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="282faab0568a7548ee3ff536ad0623fb7b9b44d36881d2c3c9efcbfd9c9ffa65" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.250028 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.361975 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf"] Nov 24 09:38:22 crc kubenswrapper[4719]: E1124 09:38:22.362369 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="extract-utilities" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.362385 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="extract-utilities" Nov 24 09:38:22 crc kubenswrapper[4719]: E1124 09:38:22.362404 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="registry-server" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.362410 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="registry-server" Nov 24 09:38:22 crc kubenswrapper[4719]: E1124 09:38:22.362417 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="extract-content" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.362423 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="extract-content" Nov 24 09:38:22 crc kubenswrapper[4719]: E1124 09:38:22.362441 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1b3223-80c0-40c5-9f45-833af2ab03be" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.362447 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1b3223-80c0-40c5-9f45-833af2ab03be" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.362630 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1b3223-80c0-40c5-9f45-833af2ab03be" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.362656 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4bc3db9-5d2d-4511-a303-8c839a6e99ef" containerName="registry-server" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.370152 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf"] Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.370251 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.376425 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.376572 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.376675 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.376726 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.376784 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.396388 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.414284 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.414360 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.414415 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.414441 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.414470 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgv8\" (UniqueName: \"kubernetes.io/projected/e45a8b91-3c8a-4471-852f-d648ddadcf6f-kube-api-access-sxgv8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.414510 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" 
(UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.515655 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.515707 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.515732 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxgv8\" (UniqueName: \"kubernetes.io/projected/e45a8b91-3c8a-4471-852f-d648ddadcf6f-kube-api-access-sxgv8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.515767 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.515819 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.515871 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.520675 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.521308 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" 
(UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.521802 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.523696 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.523851 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.533969 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxgv8\" (UniqueName: \"kubernetes.io/projected/e45a8b91-3c8a-4471-852f-d648ddadcf6f-kube-api-access-sxgv8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:22 crc kubenswrapper[4719]: I1124 09:38:22.697616 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:38:23 crc kubenswrapper[4719]: I1124 09:38:23.225445 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:38:23 crc kubenswrapper[4719]: I1124 09:38:23.228128 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf"] Nov 24 09:38:23 crc kubenswrapper[4719]: I1124 09:38:23.258714 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" event={"ID":"e45a8b91-3c8a-4471-852f-d648ddadcf6f","Type":"ContainerStarted","Data":"b0e7adf5e20dd61821aaf35140fc8aa98ed0c7d2498e813e8421f73c486ded56"} Nov 24 09:38:24 crc kubenswrapper[4719]: I1124 09:38:24.267078 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" event={"ID":"e45a8b91-3c8a-4471-852f-d648ddadcf6f","Type":"ContainerStarted","Data":"b1b8db0a77d88151a8d4cd1f14115b653b5af3aed22c2679542f7b95f57ca493"} Nov 24 09:38:24 crc kubenswrapper[4719]: I1124 09:38:24.295464 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" podStartSLOduration=1.856984811 podStartE2EDuration="2.295437522s" podCreationTimestamp="2025-11-24 09:38:22 +0000 UTC" firstStartedPulling="2025-11-24 09:38:23.225262241 +0000 UTC m=+2679.556535493" lastFinishedPulling="2025-11-24 09:38:23.663714952 +0000 UTC m=+2679.994988204" observedRunningTime="2025-11-24 09:38:24.285808342 +0000 UTC m=+2680.617081604" watchObservedRunningTime="2025-11-24 09:38:24.295437522 +0000 UTC m=+2680.626710784" Nov 24 09:39:00 crc kubenswrapper[4719]: I1124 09:39:00.283272 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6lqkr" podUID="ce9d612a-d5e7-4ab8-809e-97155ecda8ef" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 09:39:34 crc kubenswrapper[4719]: I1124 09:39:34.561899 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:39:34 crc kubenswrapper[4719]: I1124 09:39:34.562710 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:40:04 crc kubenswrapper[4719]: I1124 09:40:04.562390 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:40:04 crc kubenswrapper[4719]: I1124 09:40:04.562844 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.407736 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4n9r6"] Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.412003 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.424402 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4n9r6"] Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.601734 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7946\" (UniqueName: \"kubernetes.io/projected/872f231c-7a94-4b2c-b426-c68e89765dd4-kube-api-access-t7946\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.601879 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-utilities\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.601915 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-catalog-content\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.703169 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-utilities\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.703515 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-catalog-content\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.703634 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-utilities\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.703786 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7946\" (UniqueName: \"kubernetes.io/projected/872f231c-7a94-4b2c-b426-c68e89765dd4-kube-api-access-t7946\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.704056 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-catalog-content\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.735107 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7946\" (UniqueName: \"kubernetes.io/projected/872f231c-7a94-4b2c-b426-c68e89765dd4-kube-api-access-t7946\") pod \"certified-operators-4n9r6\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:25 crc kubenswrapper[4719]: I1124 09:40:25.746960 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:26 crc kubenswrapper[4719]: I1124 09:40:26.288479 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4n9r6"] Nov 24 09:40:26 crc kubenswrapper[4719]: I1124 09:40:26.312135 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n9r6" event={"ID":"872f231c-7a94-4b2c-b426-c68e89765dd4","Type":"ContainerStarted","Data":"e708ea4e82b16fcf1e25a97fc8fb9b66cf9c3f54af5e3c1e0addeaa6dd8c1d32"} Nov 24 09:40:27 crc kubenswrapper[4719]: I1124 09:40:27.320120 4719 generic.go:334] "Generic (PLEG): container finished" podID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerID="12ea1437b027066b4fa49a94aa80efa197f5c1f8646a8d70a3588c648da3a8fe" exitCode=0 Nov 24 09:40:27 crc kubenswrapper[4719]: I1124 09:40:27.320210 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n9r6" event={"ID":"872f231c-7a94-4b2c-b426-c68e89765dd4","Type":"ContainerDied","Data":"12ea1437b027066b4fa49a94aa80efa197f5c1f8646a8d70a3588c648da3a8fe"} Nov 24 09:40:28 crc kubenswrapper[4719]: I1124 09:40:28.330206 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n9r6" event={"ID":"872f231c-7a94-4b2c-b426-c68e89765dd4","Type":"ContainerStarted","Data":"898b4cf86dbc06495f53858b9b2c93d6414aa48ca87c23368b9485ac19d18ce1"} Nov 24 09:40:29 crc kubenswrapper[4719]: I1124 09:40:29.344945 4719 generic.go:334] "Generic (PLEG): container finished" podID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerID="898b4cf86dbc06495f53858b9b2c93d6414aa48ca87c23368b9485ac19d18ce1" exitCode=0 Nov 24 09:40:29 crc kubenswrapper[4719]: I1124 09:40:29.345005 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n9r6" event={"ID":"872f231c-7a94-4b2c-b426-c68e89765dd4","Type":"ContainerDied","Data":"898b4cf86dbc06495f53858b9b2c93d6414aa48ca87c23368b9485ac19d18ce1"} Nov 24 09:40:30 crc kubenswrapper[4719]: I1124 09:40:30.354489 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n9r6" event={"ID":"872f231c-7a94-4b2c-b426-c68e89765dd4","Type":"ContainerStarted","Data":"cac183057ec7ae1b66f51c438c2582978f86638811118444f5d4dbe0f5342407"} Nov 24 09:40:30 crc kubenswrapper[4719]: I1124 09:40:30.380975 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4n9r6" podStartSLOduration=2.957966605 podStartE2EDuration="5.38095373s" podCreationTimestamp="2025-11-24 09:40:25 +0000 UTC" firstStartedPulling="2025-11-24 09:40:27.324893038 +0000 UTC 
m=+2803.656166290" lastFinishedPulling="2025-11-24 09:40:29.747880163 +0000 UTC m=+2806.079153415" observedRunningTime="2025-11-24 09:40:30.371873606 +0000 UTC m=+2806.703146888" watchObservedRunningTime="2025-11-24 09:40:30.38095373 +0000 UTC m=+2806.712226992" Nov 24 09:40:34 crc kubenswrapper[4719]: I1124 09:40:34.562611 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:40:34 crc kubenswrapper[4719]: I1124 09:40:34.564262 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:40:34 crc kubenswrapper[4719]: I1124 09:40:34.564331 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:40:34 crc kubenswrapper[4719]: I1124 09:40:34.565131 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53d911277b59ae9e8329c3f97db6085cdd29d210d4bbd435a73653b6b25bf62a"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:40:34 crc kubenswrapper[4719]: I1124 09:40:34.565190 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://53d911277b59ae9e8329c3f97db6085cdd29d210d4bbd435a73653b6b25bf62a" gracePeriod=600 Nov 24 09:40:35 crc kubenswrapper[4719]: I1124 09:40:35.402415 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="53d911277b59ae9e8329c3f97db6085cdd29d210d4bbd435a73653b6b25bf62a" exitCode=0 Nov 24 09:40:35 crc kubenswrapper[4719]: I1124 09:40:35.402486 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"53d911277b59ae9e8329c3f97db6085cdd29d210d4bbd435a73653b6b25bf62a"} Nov 24 09:40:35 crc kubenswrapper[4719]: I1124 09:40:35.402779 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"} Nov 24 09:40:35 crc kubenswrapper[4719]: I1124 09:40:35.402811 4719 scope.go:117] "RemoveContainer" containerID="9b6825705f87fe7cb12f328bafd827215082ed12ccdd2a82c332ed0335278ed1" Nov 24 09:40:35 crc kubenswrapper[4719]: I1124 09:40:35.747613 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:35 crc kubenswrapper[4719]: I1124 09:40:35.747730 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:35 crc 
kubenswrapper[4719]: I1124 09:40:35.794783 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:36 crc kubenswrapper[4719]: I1124 09:40:36.457846 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:36 crc kubenswrapper[4719]: I1124 09:40:36.512463 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4n9r6"] Nov 24 09:40:38 crc kubenswrapper[4719]: I1124 09:40:38.427562 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4n9r6" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerName="registry-server" containerID="cri-o://cac183057ec7ae1b66f51c438c2582978f86638811118444f5d4dbe0f5342407" gracePeriod=2 Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.437105 4719 generic.go:334] "Generic (PLEG): container finished" podID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerID="cac183057ec7ae1b66f51c438c2582978f86638811118444f5d4dbe0f5342407" exitCode=0 Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.437162 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n9r6" event={"ID":"872f231c-7a94-4b2c-b426-c68e89765dd4","Type":"ContainerDied","Data":"cac183057ec7ae1b66f51c438c2582978f86638811118444f5d4dbe0f5342407"} Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.838566 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.890825 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-catalog-content\") pod \"872f231c-7a94-4b2c-b426-c68e89765dd4\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.890940 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-utilities\") pod \"872f231c-7a94-4b2c-b426-c68e89765dd4\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.891157 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7946\" (UniqueName: \"kubernetes.io/projected/872f231c-7a94-4b2c-b426-c68e89765dd4-kube-api-access-t7946\") pod \"872f231c-7a94-4b2c-b426-c68e89765dd4\" (UID: \"872f231c-7a94-4b2c-b426-c68e89765dd4\") " Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.891736 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-utilities" (OuterVolumeSpecName: "utilities") pod "872f231c-7a94-4b2c-b426-c68e89765dd4" (UID: "872f231c-7a94-4b2c-b426-c68e89765dd4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.891885 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.896823 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872f231c-7a94-4b2c-b426-c68e89765dd4-kube-api-access-t7946" (OuterVolumeSpecName: "kube-api-access-t7946") pod "872f231c-7a94-4b2c-b426-c68e89765dd4" (UID: "872f231c-7a94-4b2c-b426-c68e89765dd4"). InnerVolumeSpecName "kube-api-access-t7946". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.950094 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "872f231c-7a94-4b2c-b426-c68e89765dd4" (UID: "872f231c-7a94-4b2c-b426-c68e89765dd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.993484 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7946\" (UniqueName: \"kubernetes.io/projected/872f231c-7a94-4b2c-b426-c68e89765dd4-kube-api-access-t7946\") on node \"crc\" DevicePath \"\"" Nov 24 09:40:39 crc kubenswrapper[4719]: I1124 09:40:39.993520 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872f231c-7a94-4b2c-b426-c68e89765dd4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:40:40 crc kubenswrapper[4719]: I1124 09:40:40.450381 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n9r6" event={"ID":"872f231c-7a94-4b2c-b426-c68e89765dd4","Type":"ContainerDied","Data":"e708ea4e82b16fcf1e25a97fc8fb9b66cf9c3f54af5e3c1e0addeaa6dd8c1d32"} Nov 24 09:40:40 crc kubenswrapper[4719]: I1124 09:40:40.450423 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4n9r6" Nov 24 09:40:40 crc kubenswrapper[4719]: I1124 09:40:40.450462 4719 scope.go:117] "RemoveContainer" containerID="cac183057ec7ae1b66f51c438c2582978f86638811118444f5d4dbe0f5342407" Nov 24 09:40:40 crc kubenswrapper[4719]: I1124 09:40:40.488637 4719 scope.go:117] "RemoveContainer" containerID="898b4cf86dbc06495f53858b9b2c93d6414aa48ca87c23368b9485ac19d18ce1" Nov 24 09:40:40 crc kubenswrapper[4719]: I1124 09:40:40.509804 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4n9r6"] Nov 24 09:40:40 crc kubenswrapper[4719]: I1124 09:40:40.518885 4719 scope.go:117] "RemoveContainer" containerID="12ea1437b027066b4fa49a94aa80efa197f5c1f8646a8d70a3588c648da3a8fe" Nov 24 09:40:40 crc kubenswrapper[4719]: I1124 09:40:40.534453 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4n9r6"] Nov 24 09:40:42 crc kubenswrapper[4719]: I1124 09:40:42.529353 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" path="/var/lib/kubelet/pods/872f231c-7a94-4b2c-b426-c68e89765dd4/volumes" Nov 24 09:42:34 crc kubenswrapper[4719]: I1124 09:42:34.562432 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:42:34 crc kubenswrapper[4719]: I1124 09:42:34.563104 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.819744 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sj5z6"] Nov 24 09:42:40 crc kubenswrapper[4719]: E1124 09:42:40.827149 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerName="extract-utilities" Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.827201 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerName="extract-utilities" Nov 24 09:42:40 crc kubenswrapper[4719]: E1124 09:42:40.827243 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerName="extract-content" Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.827254 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerName="extract-content" Nov 24 09:42:40 crc kubenswrapper[4719]: E1124 09:42:40.827263 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerName="registry-server" Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.827271 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerName="registry-server" Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.827518 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="872f231c-7a94-4b2c-b426-c68e89765dd4" containerName="registry-server" Nov 24 09:42:40 crc 
kubenswrapper[4719]: I1124 09:42:40.829341 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.835911 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sj5z6"] Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.997286 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-catalog-content\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.997634 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-utilities\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:40 crc kubenswrapper[4719]: I1124 09:42:40.997744 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kql6\" (UniqueName: \"kubernetes.io/projected/627c95d5-4502-40f9-9d44-12bc566b74a2-kube-api-access-2kql6\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:41 crc kubenswrapper[4719]: I1124 09:42:41.100160 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kql6\" (UniqueName: \"kubernetes.io/projected/627c95d5-4502-40f9-9d44-12bc566b74a2-kube-api-access-2kql6\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:41 crc kubenswrapper[4719]: I1124 09:42:41.100402 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-catalog-content\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:41 crc kubenswrapper[4719]: I1124 09:42:41.100438 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-utilities\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:41 crc kubenswrapper[4719]: I1124 09:42:41.100977 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-utilities\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:41 crc kubenswrapper[4719]: I1124 09:42:41.100990 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-catalog-content\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 
24 09:42:41 crc kubenswrapper[4719]: I1124 09:42:41.119406 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kql6\" (UniqueName: \"kubernetes.io/projected/627c95d5-4502-40f9-9d44-12bc566b74a2-kube-api-access-2kql6\") pod \"community-operators-sj5z6\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:41 crc kubenswrapper[4719]: I1124 09:42:41.181443 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:41 crc kubenswrapper[4719]: I1124 09:42:41.762915 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sj5z6"] Nov 24 09:42:42 crc kubenswrapper[4719]: I1124 09:42:42.508429 4719 generic.go:334] "Generic (PLEG): container finished" podID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerID="16c5b1731f6a70911889150d58b0c3c89231ef655ee457cd6603967bc93ebd80" exitCode=0 Nov 24 09:42:42 crc kubenswrapper[4719]: I1124 09:42:42.508981 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5z6" event={"ID":"627c95d5-4502-40f9-9d44-12bc566b74a2","Type":"ContainerDied","Data":"16c5b1731f6a70911889150d58b0c3c89231ef655ee457cd6603967bc93ebd80"} Nov 24 09:42:42 crc kubenswrapper[4719]: I1124 09:42:42.509011 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5z6" event={"ID":"627c95d5-4502-40f9-9d44-12bc566b74a2","Type":"ContainerStarted","Data":"f625d085aa69a852012d0cf153f3d492eda0f923311c4d94b7b3429b2f23afa2"} Nov 24 09:42:43 crc kubenswrapper[4719]: I1124 09:42:43.517996 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5z6" event={"ID":"627c95d5-4502-40f9-9d44-12bc566b74a2","Type":"ContainerStarted","Data":"9f6c7d83b1e594cd2528cc98aea2ae2d3a60b494eda5f49622c2f685c220ba8c"} Nov 24 09:42:45 crc kubenswrapper[4719]: I1124 09:42:45.537658 4719 generic.go:334] "Generic (PLEG): container finished" podID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerID="9f6c7d83b1e594cd2528cc98aea2ae2d3a60b494eda5f49622c2f685c220ba8c" exitCode=0 Nov 24 09:42:45 crc kubenswrapper[4719]: I1124 09:42:45.537774 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5z6" event={"ID":"627c95d5-4502-40f9-9d44-12bc566b74a2","Type":"ContainerDied","Data":"9f6c7d83b1e594cd2528cc98aea2ae2d3a60b494eda5f49622c2f685c220ba8c"} Nov 24 09:42:46 crc kubenswrapper[4719]: I1124 09:42:46.547795 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5z6" event={"ID":"627c95d5-4502-40f9-9d44-12bc566b74a2","Type":"ContainerStarted","Data":"2f618b2eabdcb8a29d1bf97fdc3f89405e6c023180fd4b75323c86ab98ffc205"} Nov 24 09:42:46 crc kubenswrapper[4719]: I1124 09:42:46.571886 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sj5z6" podStartSLOduration=3.090945887 podStartE2EDuration="6.57186896s" podCreationTimestamp="2025-11-24 09:42:40 +0000 UTC" firstStartedPulling="2025-11-24 09:42:42.511278207 +0000 UTC m=+2938.842551459" lastFinishedPulling="2025-11-24 09:42:45.99220128 +0000 UTC m=+2942.323474532" observedRunningTime="2025-11-24 09:42:46.571175981 +0000 UTC m=+2942.902449273" watchObservedRunningTime="2025-11-24 09:42:46.57186896 +0000 UTC m=+2942.903142232" Nov 24 09:42:51 crc 
kubenswrapper[4719]: I1124 09:42:51.182822 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:51 crc kubenswrapper[4719]: I1124 09:42:51.183322 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:51 crc kubenswrapper[4719]: I1124 09:42:51.230406 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:51 crc kubenswrapper[4719]: I1124 09:42:51.654767 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:51 crc kubenswrapper[4719]: I1124 09:42:51.733856 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sj5z6"] Nov 24 09:42:53 crc kubenswrapper[4719]: I1124 09:42:53.614359 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sj5z6" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerName="registry-server" containerID="cri-o://2f618b2eabdcb8a29d1bf97fdc3f89405e6c023180fd4b75323c86ab98ffc205" gracePeriod=2 Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.622638 4719 generic.go:334] "Generic (PLEG): container finished" podID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerID="2f618b2eabdcb8a29d1bf97fdc3f89405e6c023180fd4b75323c86ab98ffc205" exitCode=0 Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.622670 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5z6" event={"ID":"627c95d5-4502-40f9-9d44-12bc566b74a2","Type":"ContainerDied","Data":"2f618b2eabdcb8a29d1bf97fdc3f89405e6c023180fd4b75323c86ab98ffc205"} Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.727698 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.885187 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-utilities\") pod \"627c95d5-4502-40f9-9d44-12bc566b74a2\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.885551 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-catalog-content\") pod \"627c95d5-4502-40f9-9d44-12bc566b74a2\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.885679 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kql6\" (UniqueName: \"kubernetes.io/projected/627c95d5-4502-40f9-9d44-12bc566b74a2-kube-api-access-2kql6\") pod \"627c95d5-4502-40f9-9d44-12bc566b74a2\" (UID: \"627c95d5-4502-40f9-9d44-12bc566b74a2\") " Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.885911 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-utilities" (OuterVolumeSpecName: "utilities") pod "627c95d5-4502-40f9-9d44-12bc566b74a2" (UID: "627c95d5-4502-40f9-9d44-12bc566b74a2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.886241 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.895363 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/627c95d5-4502-40f9-9d44-12bc566b74a2-kube-api-access-2kql6" (OuterVolumeSpecName: "kube-api-access-2kql6") pod "627c95d5-4502-40f9-9d44-12bc566b74a2" (UID: "627c95d5-4502-40f9-9d44-12bc566b74a2"). InnerVolumeSpecName "kube-api-access-2kql6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.938845 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "627c95d5-4502-40f9-9d44-12bc566b74a2" (UID: "627c95d5-4502-40f9-9d44-12bc566b74a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.987806 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kql6\" (UniqueName: \"kubernetes.io/projected/627c95d5-4502-40f9-9d44-12bc566b74a2-kube-api-access-2kql6\") on node \"crc\" DevicePath \"\"" Nov 24 09:42:54 crc kubenswrapper[4719]: I1124 09:42:54.987849 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/627c95d5-4502-40f9-9d44-12bc566b74a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:42:55 crc kubenswrapper[4719]: I1124 09:42:55.646615 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sj5z6" event={"ID":"627c95d5-4502-40f9-9d44-12bc566b74a2","Type":"ContainerDied","Data":"f625d085aa69a852012d0cf153f3d492eda0f923311c4d94b7b3429b2f23afa2"} Nov 24 09:42:55 crc kubenswrapper[4719]: I1124 09:42:55.646714 4719 scope.go:117] "RemoveContainer" containerID="2f618b2eabdcb8a29d1bf97fdc3f89405e6c023180fd4b75323c86ab98ffc205" Nov 24 09:42:55 crc kubenswrapper[4719]: I1124 09:42:55.647111 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sj5z6" Nov 24 09:42:55 crc kubenswrapper[4719]: I1124 09:42:55.683142 4719 scope.go:117] "RemoveContainer" containerID="9f6c7d83b1e594cd2528cc98aea2ae2d3a60b494eda5f49622c2f685c220ba8c" Nov 24 09:42:55 crc kubenswrapper[4719]: I1124 09:42:55.701969 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sj5z6"] Nov 24 09:42:55 crc kubenswrapper[4719]: I1124 09:42:55.711423 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sj5z6"] Nov 24 09:42:55 crc kubenswrapper[4719]: I1124 09:42:55.714572 4719 scope.go:117] "RemoveContainer" containerID="16c5b1731f6a70911889150d58b0c3c89231ef655ee457cd6603967bc93ebd80" Nov 24 09:42:56 crc kubenswrapper[4719]: I1124 09:42:56.532934 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" path="/var/lib/kubelet/pods/627c95d5-4502-40f9-9d44-12bc566b74a2/volumes" Nov 24 09:43:04 crc kubenswrapper[4719]: I1124 09:43:04.562099 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:43:04 crc kubenswrapper[4719]: I1124 09:43:04.562659 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:43:34 crc kubenswrapper[4719]: I1124 09:43:34.562287 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:43:34 crc kubenswrapper[4719]: I1124 09:43:34.562853 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:43:34 crc kubenswrapper[4719]: I1124 09:43:34.562917 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:43:34 crc kubenswrapper[4719]: I1124 09:43:34.564113 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:43:34 crc kubenswrapper[4719]: I1124 09:43:34.564206 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" 
gracePeriod=600 Nov 24 09:43:34 crc kubenswrapper[4719]: E1124 09:43:34.700985 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:43:35 crc kubenswrapper[4719]: I1124 09:43:35.013666 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" exitCode=0 Nov 24 09:43:35 crc kubenswrapper[4719]: I1124 09:43:35.013719 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"} Nov 24 09:43:35 crc kubenswrapper[4719]: I1124 09:43:35.014020 4719 scope.go:117] "RemoveContainer" containerID="53d911277b59ae9e8329c3f97db6085cdd29d210d4bbd435a73653b6b25bf62a" Nov 24 09:43:35 crc kubenswrapper[4719]: I1124 09:43:35.014879 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:43:35 crc kubenswrapper[4719]: E1124 09:43:35.015294 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:43:43 crc kubenswrapper[4719]: I1124 09:43:43.092187 4719 generic.go:334] "Generic (PLEG): container finished" podID="e45a8b91-3c8a-4471-852f-d648ddadcf6f" containerID="b1b8db0a77d88151a8d4cd1f14115b653b5af3aed22c2679542f7b95f57ca493" exitCode=0 Nov 24 09:43:43 crc kubenswrapper[4719]: I1124 09:43:43.092231 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" event={"ID":"e45a8b91-3c8a-4471-852f-d648ddadcf6f","Type":"ContainerDied","Data":"b1b8db0a77d88151a8d4cd1f14115b653b5af3aed22c2679542f7b95f57ca493"} Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.485138 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.521289 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-inventory\") pod \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.521337 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ssh-key\") pod \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.521377 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ceph\") pod \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.521462 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-secret-0\") pod \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.521556 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxgv8\" (UniqueName: \"kubernetes.io/projected/e45a8b91-3c8a-4471-852f-d648ddadcf6f-kube-api-access-sxgv8\") pod \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.521583 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-combined-ca-bundle\") pod \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\" (UID: \"e45a8b91-3c8a-4471-852f-d648ddadcf6f\") " Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.533528 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45a8b91-3c8a-4471-852f-d648ddadcf6f-kube-api-access-sxgv8" (OuterVolumeSpecName: "kube-api-access-sxgv8") pod "e45a8b91-3c8a-4471-852f-d648ddadcf6f" (UID: "e45a8b91-3c8a-4471-852f-d648ddadcf6f"). InnerVolumeSpecName "kube-api-access-sxgv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.545859 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e45a8b91-3c8a-4471-852f-d648ddadcf6f" (UID: "e45a8b91-3c8a-4471-852f-d648ddadcf6f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.546222 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ceph" (OuterVolumeSpecName: "ceph") pod "e45a8b91-3c8a-4471-852f-d648ddadcf6f" (UID: "e45a8b91-3c8a-4471-852f-d648ddadcf6f"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.555010 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e45a8b91-3c8a-4471-852f-d648ddadcf6f" (UID: "e45a8b91-3c8a-4471-852f-d648ddadcf6f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.557191 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-inventory" (OuterVolumeSpecName: "inventory") pod "e45a8b91-3c8a-4471-852f-d648ddadcf6f" (UID: "e45a8b91-3c8a-4471-852f-d648ddadcf6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.569663 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e45a8b91-3c8a-4471-852f-d648ddadcf6f" (UID: "e45a8b91-3c8a-4471-852f-d648ddadcf6f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.624549 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.624573 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.624582 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.624590 4719 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.624601 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxgv8\" (UniqueName: \"kubernetes.io/projected/e45a8b91-3c8a-4471-852f-d648ddadcf6f-kube-api-access-sxgv8\") on node \"crc\" DevicePath \"\"" Nov 24 09:43:44 crc kubenswrapper[4719]: I1124 09:43:44.624609 4719 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45a8b91-3c8a-4471-852f-d648ddadcf6f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.114721 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" event={"ID":"e45a8b91-3c8a-4471-852f-d648ddadcf6f","Type":"ContainerDied","Data":"b0e7adf5e20dd61821aaf35140fc8aa98ed0c7d2498e813e8421f73c486ded56"} Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.114776 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e7adf5e20dd61821aaf35140fc8aa98ed0c7d2498e813e8421f73c486ded56" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.114859 4719 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.243909 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45"] Nov 24 09:43:45 crc kubenswrapper[4719]: E1124 09:43:45.244406 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerName="registry-server" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.244437 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerName="registry-server" Nov 24 09:43:45 crc kubenswrapper[4719]: E1124 09:43:45.244471 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerName="extract-content" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.244480 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerName="extract-content" Nov 24 09:43:45 crc kubenswrapper[4719]: E1124 09:43:45.244501 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerName="extract-utilities" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.244512 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerName="extract-utilities" Nov 24 09:43:45 crc kubenswrapper[4719]: E1124 09:43:45.244530 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45a8b91-3c8a-4471-852f-d648ddadcf6f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.244539 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45a8b91-3c8a-4471-852f-d648ddadcf6f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.244772 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="627c95d5-4502-40f9-9d44-12bc566b74a2" containerName="registry-server" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.244809 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45a8b91-3c8a-4471-852f-d648ddadcf6f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.245770 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.249990 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.250150 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-s7gxc" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.250228 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.250325 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.250532 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.250649 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.250758 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.250909 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.250934 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.260157 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45"] Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334457 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334508 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334536 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334580 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334669 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334706 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334734 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334805 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334836 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9z2t\" (UniqueName: \"kubernetes.io/projected/c36f9bbf-22ba-458e-a531-081db1b99878-kube-api-access-v9z2t\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.334862 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.335003 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.437256 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.437638 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.437760 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.437880 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.437971 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9z2t\" (UniqueName: \"kubernetes.io/projected/c36f9bbf-22ba-458e-a531-081db1b99878-kube-api-access-v9z2t\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.438123 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.438274 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.438940 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-nova-extra-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.439095 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.439254 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.439375 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.439772 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.442216 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.443736 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.444144 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.444613 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.444998 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.445055 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.445454 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.450531 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.453754 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.460392 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9z2t\" (UniqueName: \"kubernetes.io/projected/c36f9bbf-22ba-458e-a531-081db1b99878-kube-api-access-v9z2t\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:45 crc kubenswrapper[4719]: I1124 09:43:45.566073 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:43:46 crc kubenswrapper[4719]: I1124 09:43:46.135408 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45"] Nov 24 09:43:46 crc kubenswrapper[4719]: I1124 09:43:46.140773 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:43:47 crc kubenswrapper[4719]: I1124 09:43:47.137672 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" event={"ID":"c36f9bbf-22ba-458e-a531-081db1b99878","Type":"ContainerStarted","Data":"45fe8ded8c8289e497ca23535f43a97bc34dd894810e84f6f3ed9accdec4413b"} Nov 24 09:43:47 crc kubenswrapper[4719]: I1124 09:43:47.138119 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" event={"ID":"c36f9bbf-22ba-458e-a531-081db1b99878","Type":"ContainerStarted","Data":"adcec916947a9b562887d0dfbfda6a5d4bd87f1eefd7bd624588fe3c1465e859"} Nov 24 09:43:47 crc kubenswrapper[4719]: I1124 09:43:47.170273 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" podStartSLOduration=1.7384013189999998 podStartE2EDuration="2.170248967s" podCreationTimestamp="2025-11-24 09:43:45 +0000 UTC" firstStartedPulling="2025-11-24 09:43:46.140582058 +0000 UTC m=+3002.471855310" lastFinishedPulling="2025-11-24 09:43:46.572429706 +0000 UTC m=+3002.903702958" observedRunningTime="2025-11-24 09:43:47.16127365 +0000 UTC m=+3003.492546942" watchObservedRunningTime="2025-11-24 09:43:47.170248967 +0000 UTC m=+3003.501522229" Nov 24 09:43:48 crc kubenswrapper[4719]: I1124 09:43:48.521486 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:43:48 crc kubenswrapper[4719]: E1124 09:43:48.522695 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:44:00 crc kubenswrapper[4719]: I1124 09:44:00.521332 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:44:00 crc kubenswrapper[4719]: E1124 09:44:00.521984 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.314612 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vf45w"] Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.322773 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.331377 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf45w"] Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.367393 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrsp\" (UniqueName: \"kubernetes.io/projected/92b12af0-f565-4418-b67a-3b2226036a35-kube-api-access-nwrsp\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.367710 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-catalog-content\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.367749 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-utilities\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.468996 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrsp\" (UniqueName: \"kubernetes.io/projected/92b12af0-f565-4418-b67a-3b2226036a35-kube-api-access-nwrsp\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.469070 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-catalog-content\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.469098 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-utilities\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.469604 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-utilities\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.469734 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-catalog-content\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.489127 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nwrsp\" (UniqueName: \"kubernetes.io/projected/92b12af0-f565-4418-b67a-3b2226036a35-kube-api-access-nwrsp\") pod \"redhat-marketplace-vf45w\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:09 crc kubenswrapper[4719]: I1124 09:44:09.641075 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:10 crc kubenswrapper[4719]: I1124 09:44:10.178739 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf45w"] Nov 24 09:44:10 crc kubenswrapper[4719]: I1124 09:44:10.333391 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf45w" event={"ID":"92b12af0-f565-4418-b67a-3b2226036a35","Type":"ContainerStarted","Data":"7ec2621cbe39867c19931c9d313908fc168eae09d84152f1b6f362e5d453b7a1"} Nov 24 09:44:11 crc kubenswrapper[4719]: I1124 09:44:11.342445 4719 generic.go:334] "Generic (PLEG): container finished" podID="92b12af0-f565-4418-b67a-3b2226036a35" containerID="20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f" exitCode=0 Nov 24 09:44:11 crc kubenswrapper[4719]: I1124 09:44:11.342495 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf45w" event={"ID":"92b12af0-f565-4418-b67a-3b2226036a35","Type":"ContainerDied","Data":"20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f"} Nov 24 09:44:13 crc kubenswrapper[4719]: I1124 09:44:13.362356 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf45w" event={"ID":"92b12af0-f565-4418-b67a-3b2226036a35","Type":"ContainerStarted","Data":"ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681"} Nov 24 09:44:14 crc kubenswrapper[4719]: I1124 09:44:14.373757 4719 generic.go:334] "Generic (PLEG): container finished" podID="92b12af0-f565-4418-b67a-3b2226036a35" containerID="ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681" exitCode=0 Nov 24 09:44:14 crc kubenswrapper[4719]: I1124 09:44:14.373881 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf45w" event={"ID":"92b12af0-f565-4418-b67a-3b2226036a35","Type":"ContainerDied","Data":"ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681"} Nov 24 09:44:14 crc kubenswrapper[4719]: I1124 09:44:14.547532 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:44:14 crc kubenswrapper[4719]: E1124 09:44:14.548365 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:44:15 crc kubenswrapper[4719]: I1124 09:44:15.384841 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf45w" event={"ID":"92b12af0-f565-4418-b67a-3b2226036a35","Type":"ContainerStarted","Data":"b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8"} Nov 24 09:44:15 crc kubenswrapper[4719]: I1124 09:44:15.423093 4719 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-vf45w" podStartSLOduration=2.915511201 podStartE2EDuration="6.423065217s" podCreationTimestamp="2025-11-24 09:44:09 +0000 UTC" firstStartedPulling="2025-11-24 09:44:11.346090312 +0000 UTC m=+3027.677363554" lastFinishedPulling="2025-11-24 09:44:14.853644308 +0000 UTC m=+3031.184917570" observedRunningTime="2025-11-24 09:44:15.406916586 +0000 UTC m=+3031.738189848" watchObservedRunningTime="2025-11-24 09:44:15.423065217 +0000 UTC m=+3031.754338479" Nov 24 09:44:19 crc kubenswrapper[4719]: I1124 09:44:19.641940 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:19 crc kubenswrapper[4719]: I1124 09:44:19.643199 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:19 crc kubenswrapper[4719]: I1124 09:44:19.690763 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:20 crc kubenswrapper[4719]: I1124 09:44:20.467006 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:20 crc kubenswrapper[4719]: I1124 09:44:20.514679 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf45w"] Nov 24 09:44:22 crc kubenswrapper[4719]: I1124 09:44:22.437630 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vf45w" podUID="92b12af0-f565-4418-b67a-3b2226036a35" containerName="registry-server" containerID="cri-o://b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8" gracePeriod=2 Nov 24 09:44:22 crc kubenswrapper[4719]: E1124 09:44:22.685598 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92b12af0_f565_4418_b67a_3b2226036a35.slice/crio-conmon-b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8.scope\": RecentStats: unable to find data in memory cache]" Nov 24 09:44:22 crc kubenswrapper[4719]: I1124 09:44:22.881953 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.017438 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-utilities\") pod \"92b12af0-f565-4418-b67a-3b2226036a35\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.018002 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-catalog-content\") pod \"92b12af0-f565-4418-b67a-3b2226036a35\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.018188 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwrsp\" (UniqueName: \"kubernetes.io/projected/92b12af0-f565-4418-b67a-3b2226036a35-kube-api-access-nwrsp\") pod \"92b12af0-f565-4418-b67a-3b2226036a35\" (UID: \"92b12af0-f565-4418-b67a-3b2226036a35\") " Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.018483 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-utilities" (OuterVolumeSpecName: "utilities") pod "92b12af0-f565-4418-b67a-3b2226036a35" (UID: "92b12af0-f565-4418-b67a-3b2226036a35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.019263 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.025312 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b12af0-f565-4418-b67a-3b2226036a35-kube-api-access-nwrsp" (OuterVolumeSpecName: "kube-api-access-nwrsp") pod "92b12af0-f565-4418-b67a-3b2226036a35" (UID: "92b12af0-f565-4418-b67a-3b2226036a35"). InnerVolumeSpecName "kube-api-access-nwrsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.039062 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92b12af0-f565-4418-b67a-3b2226036a35" (UID: "92b12af0-f565-4418-b67a-3b2226036a35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.122229 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b12af0-f565-4418-b67a-3b2226036a35-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.122268 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwrsp\" (UniqueName: \"kubernetes.io/projected/92b12af0-f565-4418-b67a-3b2226036a35-kube-api-access-nwrsp\") on node \"crc\" DevicePath \"\"" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.448546 4719 generic.go:334] "Generic (PLEG): container finished" podID="92b12af0-f565-4418-b67a-3b2226036a35" containerID="b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8" exitCode=0 Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.448607 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf45w" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.448611 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf45w" event={"ID":"92b12af0-f565-4418-b67a-3b2226036a35","Type":"ContainerDied","Data":"b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8"} Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.448977 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf45w" event={"ID":"92b12af0-f565-4418-b67a-3b2226036a35","Type":"ContainerDied","Data":"7ec2621cbe39867c19931c9d313908fc168eae09d84152f1b6f362e5d453b7a1"} Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.448999 4719 scope.go:117] "RemoveContainer" containerID="b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.492982 4719 scope.go:117] "RemoveContainer" containerID="ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.513798 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf45w"] Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.517697 4719 scope.go:117] "RemoveContainer" containerID="20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.529565 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf45w"] Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.553735 4719 scope.go:117] "RemoveContainer" containerID="b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8" Nov 24 09:44:23 crc kubenswrapper[4719]: E1124 09:44:23.554349 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8\": container with ID starting with b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8 not found: ID does not exist" containerID="b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.554383 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8"} err="failed to get container status 
\"b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8\": rpc error: code = NotFound desc = could not find container \"b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8\": container with ID starting with b6398ceec68229060c457c1099305fe36785790f2d644c142a48a843137403f8 not found: ID does not exist" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.554402 4719 scope.go:117] "RemoveContainer" containerID="ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681" Nov 24 09:44:23 crc kubenswrapper[4719]: E1124 09:44:23.554875 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681\": container with ID starting with ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681 not found: ID does not exist" containerID="ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.554910 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681"} err="failed to get container status \"ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681\": rpc error: code = NotFound desc = could not find container \"ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681\": container with ID starting with ba390f46b8a4f78be877838f4a860d0828d2338cfdbb2ac446bfc8bcdb3c3681 not found: ID does not exist" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.554934 4719 scope.go:117] "RemoveContainer" containerID="20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f" Nov 24 09:44:23 crc kubenswrapper[4719]: E1124 09:44:23.555480 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f\": container with ID starting with 20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f not found: ID does not exist" containerID="20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f" Nov 24 09:44:23 crc kubenswrapper[4719]: I1124 09:44:23.555544 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f"} err="failed to get container status \"20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f\": rpc error: code = NotFound desc = could not find container \"20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f\": container with ID starting with 20715f30803e358f6eb0a27610c105a1f6f0c5dd786410b0b342fa3056fa641f not found: ID does not exist" Nov 24 09:44:24 crc kubenswrapper[4719]: I1124 09:44:24.532471 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b12af0-f565-4418-b67a-3b2226036a35" path="/var/lib/kubelet/pods/92b12af0-f565-4418-b67a-3b2226036a35/volumes" Nov 24 09:44:29 crc kubenswrapper[4719]: I1124 09:44:29.520889 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:44:29 crc kubenswrapper[4719]: E1124 09:44:29.523152 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:44:42 crc kubenswrapper[4719]: I1124 09:44:42.520755 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:44:42 crc kubenswrapper[4719]: E1124 09:44:42.521670 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:44:55 crc kubenswrapper[4719]: I1124 09:44:55.521610 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:44:55 crc kubenswrapper[4719]: E1124 09:44:55.522388 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.172958 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb"] Nov 24 09:45:00 crc kubenswrapper[4719]: E1124 09:45:00.173923 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b12af0-f565-4418-b67a-3b2226036a35" containerName="extract-utilities" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.173937 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b12af0-f565-4418-b67a-3b2226036a35" containerName="extract-utilities" Nov 24 09:45:00 crc kubenswrapper[4719]: E1124 09:45:00.173945 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b12af0-f565-4418-b67a-3b2226036a35" containerName="extract-content" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.173950 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b12af0-f565-4418-b67a-3b2226036a35" containerName="extract-content" Nov 24 09:45:00 crc kubenswrapper[4719]: E1124 09:45:00.173965 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b12af0-f565-4418-b67a-3b2226036a35" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.173971 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b12af0-f565-4418-b67a-3b2226036a35" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.174184 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b12af0-f565-4418-b67a-3b2226036a35" containerName="registry-server" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.174752 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.182740 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb"] Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.217877 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.218338 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.321929 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c767\" (UniqueName: \"kubernetes.io/projected/5206f812-b695-46bc-9b4e-f913e6aaab0f-kube-api-access-4c767\") pod \"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.321978 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5206f812-b695-46bc-9b4e-f913e6aaab0f-config-volume\") pod \"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.322011 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5206f812-b695-46bc-9b4e-f913e6aaab0f-secret-volume\") pod \"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.423823 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c767\" (UniqueName: \"kubernetes.io/projected/5206f812-b695-46bc-9b4e-f913e6aaab0f-kube-api-access-4c767\") pod \"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.423883 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5206f812-b695-46bc-9b4e-f913e6aaab0f-config-volume\") pod \"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.423920 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5206f812-b695-46bc-9b4e-f913e6aaab0f-secret-volume\") pod \"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.425905 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5206f812-b695-46bc-9b4e-f913e6aaab0f-config-volume\") pod 
\"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.430113 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5206f812-b695-46bc-9b4e-f913e6aaab0f-secret-volume\") pod \"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.444635 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c767\" (UniqueName: \"kubernetes.io/projected/5206f812-b695-46bc-9b4e-f913e6aaab0f-kube-api-access-4c767\") pod \"collect-profiles-29399625-z76wb\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:00 crc kubenswrapper[4719]: I1124 09:45:00.532108 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:01 crc kubenswrapper[4719]: I1124 09:45:01.025838 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb"] Nov 24 09:45:01 crc kubenswrapper[4719]: I1124 09:45:01.776641 4719 generic.go:334] "Generic (PLEG): container finished" podID="5206f812-b695-46bc-9b4e-f913e6aaab0f" containerID="a11904d7a6baf1d1d473f38aee7eb0a46d844fb170c294b7f829727fa3142a9f" exitCode=0 Nov 24 09:45:01 crc kubenswrapper[4719]: I1124 09:45:01.777418 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" event={"ID":"5206f812-b695-46bc-9b4e-f913e6aaab0f","Type":"ContainerDied","Data":"a11904d7a6baf1d1d473f38aee7eb0a46d844fb170c294b7f829727fa3142a9f"} Nov 24 09:45:01 crc kubenswrapper[4719]: I1124 09:45:01.777467 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" event={"ID":"5206f812-b695-46bc-9b4e-f913e6aaab0f","Type":"ContainerStarted","Data":"56542d1a2ad855d3898595817fd2717afea9576e40df67fd4b273fb11c863c6f"} Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.095109 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.283521 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5206f812-b695-46bc-9b4e-f913e6aaab0f-secret-volume\") pod \"5206f812-b695-46bc-9b4e-f913e6aaab0f\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.283631 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5206f812-b695-46bc-9b4e-f913e6aaab0f-config-volume\") pod \"5206f812-b695-46bc-9b4e-f913e6aaab0f\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.283736 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c767\" (UniqueName: \"kubernetes.io/projected/5206f812-b695-46bc-9b4e-f913e6aaab0f-kube-api-access-4c767\") pod \"5206f812-b695-46bc-9b4e-f913e6aaab0f\" (UID: \"5206f812-b695-46bc-9b4e-f913e6aaab0f\") " Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.284254 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5206f812-b695-46bc-9b4e-f913e6aaab0f-config-volume" (OuterVolumeSpecName: "config-volume") pod "5206f812-b695-46bc-9b4e-f913e6aaab0f" (UID: "5206f812-b695-46bc-9b4e-f913e6aaab0f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.289138 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5206f812-b695-46bc-9b4e-f913e6aaab0f-kube-api-access-4c767" (OuterVolumeSpecName: "kube-api-access-4c767") pod "5206f812-b695-46bc-9b4e-f913e6aaab0f" (UID: "5206f812-b695-46bc-9b4e-f913e6aaab0f"). InnerVolumeSpecName "kube-api-access-4c767". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.289349 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5206f812-b695-46bc-9b4e-f913e6aaab0f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5206f812-b695-46bc-9b4e-f913e6aaab0f" (UID: "5206f812-b695-46bc-9b4e-f913e6aaab0f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.385794 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c767\" (UniqueName: \"kubernetes.io/projected/5206f812-b695-46bc-9b4e-f913e6aaab0f-kube-api-access-4c767\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.385830 4719 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5206f812-b695-46bc-9b4e-f913e6aaab0f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.385841 4719 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5206f812-b695-46bc-9b4e-f913e6aaab0f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.794717 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" event={"ID":"5206f812-b695-46bc-9b4e-f913e6aaab0f","Type":"ContainerDied","Data":"56542d1a2ad855d3898595817fd2717afea9576e40df67fd4b273fb11c863c6f"} Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.795009 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56542d1a2ad855d3898595817fd2717afea9576e40df67fd4b273fb11c863c6f" Nov 24 09:45:03 crc kubenswrapper[4719]: I1124 09:45:03.794782 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399625-z76wb" Nov 24 09:45:04 crc kubenswrapper[4719]: I1124 09:45:04.167661 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"] Nov 24 09:45:04 crc kubenswrapper[4719]: I1124 09:45:04.174733 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399580-q82qg"] Nov 24 09:45:04 crc kubenswrapper[4719]: I1124 09:45:04.532875 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4ab863-3119-4f56-b1a3-b16105f0b7ed" path="/var/lib/kubelet/pods/ee4ab863-3119-4f56-b1a3-b16105f0b7ed/volumes" Nov 24 09:45:06 crc kubenswrapper[4719]: I1124 09:45:06.521613 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:45:06 crc kubenswrapper[4719]: E1124 09:45:06.521883 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:45:11 crc kubenswrapper[4719]: I1124 09:45:11.966769 4719 scope.go:117] "RemoveContainer" containerID="8a3d18ace2fb6cc6fa4d7f7f8739db8e2d1791e46f04074d102ac5b217642a4b" Nov 24 09:45:20 crc kubenswrapper[4719]: I1124 09:45:20.520615 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:45:20 crc kubenswrapper[4719]: E1124 09:45:20.521582 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:45:33 crc kubenswrapper[4719]: I1124 09:45:33.520328 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:45:33 crc kubenswrapper[4719]: E1124 09:45:33.521933 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:45:47 crc kubenswrapper[4719]: I1124 09:45:47.521082 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:45:47 crc kubenswrapper[4719]: E1124 09:45:47.521909 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:46:01 crc kubenswrapper[4719]: I1124 09:46:01.521057 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:46:01 crc kubenswrapper[4719]: E1124 09:46:01.521874 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:46:16 crc kubenswrapper[4719]: I1124 09:46:16.520766 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:46:16 crc kubenswrapper[4719]: E1124 09:46:16.521546 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:46:29 crc kubenswrapper[4719]: I1124 09:46:29.521206 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:46:29 crc kubenswrapper[4719]: E1124 09:46:29.523653 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:46:40 crc kubenswrapper[4719]: I1124 09:46:40.521103 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:46:40 crc kubenswrapper[4719]: E1124 09:46:40.521756 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:46:53 crc kubenswrapper[4719]: I1124 09:46:53.521141 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:46:53 crc kubenswrapper[4719]: E1124 09:46:53.521891 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:47:05 crc kubenswrapper[4719]: I1124 09:47:05.520595 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:47:05 crc kubenswrapper[4719]: E1124 09:47:05.522344 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:47:09 crc kubenswrapper[4719]: I1124 09:47:09.972866 4719 generic.go:334] "Generic (PLEG): container finished" podID="c36f9bbf-22ba-458e-a531-081db1b99878" containerID="45fe8ded8c8289e497ca23535f43a97bc34dd894810e84f6f3ed9accdec4413b" exitCode=0 Nov 24 09:47:09 crc kubenswrapper[4719]: I1124 09:47:09.973025 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" event={"ID":"c36f9bbf-22ba-458e-a531-081db1b99878","Type":"ContainerDied","Data":"45fe8ded8c8289e497ca23535f43a97bc34dd894810e84f6f3ed9accdec4413b"} Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.386202 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484125 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9z2t\" (UniqueName: \"kubernetes.io/projected/c36f9bbf-22ba-458e-a531-081db1b99878-kube-api-access-v9z2t\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484221 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-inventory\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484254 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-1\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484276 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-nova-extra-config-0\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484315 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-0\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484367 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-1\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484404 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-ceph-nova-0\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484435 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-0\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484478 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ceph\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484519 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-custom-ceph-combined-ca-bundle\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.484579 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ssh-key\") pod \"c36f9bbf-22ba-458e-a531-081db1b99878\" (UID: \"c36f9bbf-22ba-458e-a531-081db1b99878\") " Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.490594 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36f9bbf-22ba-458e-a531-081db1b99878-kube-api-access-v9z2t" (OuterVolumeSpecName: "kube-api-access-v9z2t") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "kube-api-access-v9z2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.503839 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ceph" (OuterVolumeSpecName: "ceph") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.508014 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.511509 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.516887 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.523822 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.524446 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-inventory" (OuterVolumeSpecName: "inventory") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.525203 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.529865 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.530692 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.543872 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c36f9bbf-22ba-458e-a531-081db1b99878" (UID: "c36f9bbf-22ba-458e-a531-081db1b99878"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586390 4719 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586440 4719 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586452 4719 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586460 4719 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/c36f9bbf-22ba-458e-a531-081db1b99878-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586468 4719 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586529 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586561 4719 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586569 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586578 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9z2t\" (UniqueName: \"kubernetes.io/projected/c36f9bbf-22ba-458e-a531-081db1b99878-kube-api-access-v9z2t\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586587 4719 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.586595 4719 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c36f9bbf-22ba-458e-a531-081db1b99878-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.993529 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" event={"ID":"c36f9bbf-22ba-458e-a531-081db1b99878","Type":"ContainerDied","Data":"adcec916947a9b562887d0dfbfda6a5d4bd87f1eefd7bd624588fe3c1465e859"} Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.993575 4719 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="adcec916947a9b562887d0dfbfda6a5d4bd87f1eefd7bd624588fe3c1465e859" Nov 24 09:47:11 crc kubenswrapper[4719]: I1124 09:47:11.993605 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45" Nov 24 09:47:20 crc kubenswrapper[4719]: I1124 09:47:20.521291 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:47:20 crc kubenswrapper[4719]: E1124 09:47:20.522072 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.875136 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 24 09:47:27 crc kubenswrapper[4719]: E1124 09:47:27.875886 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36f9bbf-22ba-458e-a531-081db1b99878" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.875898 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36f9bbf-22ba-458e-a531-081db1b99878" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 24 09:47:27 crc kubenswrapper[4719]: E1124 09:47:27.875932 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5206f812-b695-46bc-9b4e-f913e6aaab0f" containerName="collect-profiles" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.875940 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5206f812-b695-46bc-9b4e-f913e6aaab0f" containerName="collect-profiles" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.876125 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5206f812-b695-46bc-9b4e-f913e6aaab0f" containerName="collect-profiles" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.876137 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36f9bbf-22ba-458e-a531-081db1b99878" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.877018 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.879585 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.879621 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.909423 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.911213 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.913508 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.929783 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 24 09:47:27 crc kubenswrapper[4719]: I1124 09:47:27.969013 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988262 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-lib-modules\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988366 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-dev\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988401 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-scripts\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988459 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-config-data\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988497 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988537 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988563 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988590 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d9e3bfc-9c58-4534-89f9-72f35c264a80-ceph\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc 
kubenswrapper[4719]: I1124 09:47:27.988622 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988659 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988693 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988728 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45t6z\" (UniqueName: \"kubernetes.io/projected/9d9e3bfc-9c58-4534-89f9-72f35c264a80-kube-api-access-45t6z\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988764 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988818 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988842 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-sys\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:27.988898 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-run\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.090844 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.090892 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.090920 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx8fd\" (UniqueName: \"kubernetes.io/projected/82bfb246-8a64-46b7-9223-f2158b114186-kube-api-access-tx8fd\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.090940 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.090959 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d9e3bfc-9c58-4534-89f9-72f35c264a80-ceph\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.090978 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-run\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.090997 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091013 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091050 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82bfb246-8a64-46b7-9223-f2158b114186-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091078 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091102 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " 
pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091124 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45t6z\" (UniqueName: \"kubernetes.io/projected/9d9e3bfc-9c58-4534-89f9-72f35c264a80-kube-api-access-45t6z\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091145 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091165 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091183 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091202 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091226 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091245 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-sys\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091264 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-sys\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091294 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091310 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091328 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-run\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091345 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-lib-modules\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091364 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091386 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091415 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091435 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-dev\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091457 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-scripts\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091474 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091491 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: 
\"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091518 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-dev\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.091539 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-config-data\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.092359 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.092422 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-run\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.092435 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-sys\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.092445 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.092466 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-lib-modules\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.092379 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.092881 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.092991 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " 
pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.093086 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.093190 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9d9e3bfc-9c58-4534-89f9-72f35c264a80-dev\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.098141 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-config-data\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.106021 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.106371 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9d9e3bfc-9c58-4534-89f9-72f35c264a80-ceph\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.108484 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-scripts\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.109612 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45t6z\" (UniqueName: \"kubernetes.io/projected/9d9e3bfc-9c58-4534-89f9-72f35c264a80-kube-api-access-45t6z\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.124164 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d9e3bfc-9c58-4534-89f9-72f35c264a80-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9d9e3bfc-9c58-4534-89f9-72f35c264a80\") " pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195318 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195367 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " 
pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195395 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195426 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-sys\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195464 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195484 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195511 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195539 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195574 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195612 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195636 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195670 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dev\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-dev\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195710 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx8fd\" (UniqueName: \"kubernetes.io/projected/82bfb246-8a64-46b7-9223-f2158b114186-kube-api-access-tx8fd\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195746 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-run\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195767 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.195786 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82bfb246-8a64-46b7-9223-f2158b114186-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.196576 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.199507 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-sys\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.199682 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.199754 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.199787 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-dev\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 
09:47:28.200239 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-run\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.200732 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.201231 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.201279 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.201317 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.204142 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/82bfb246-8a64-46b7-9223-f2158b114186-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.204479 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.205408 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.206275 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/82bfb246-8a64-46b7-9223-f2158b114186-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.230555 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.240279 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/82bfb246-8a64-46b7-9223-f2158b114186-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.247934 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx8fd\" (UniqueName: \"kubernetes.io/projected/82bfb246-8a64-46b7-9223-f2158b114186-kube-api-access-tx8fd\") pod \"cinder-volume-volume1-0\" (UID: \"82bfb246-8a64-46b7-9223-f2158b114186\") " pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.538165 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.694797 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.696267 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.700023 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.700228 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vwfrr" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.700344 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.700464 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.772216 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.813977 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8rwf\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-kube-api-access-d8rwf\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.815074 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-ceph\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.815205 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-scripts\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.815237 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " 
pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.815269 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-logs\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.815297 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-config-data\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.815412 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.815438 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.815485 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.841454 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.845238 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.849532 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.851837 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.861950 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918328 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8rwf\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-kube-api-access-d8rwf\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918386 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5nns\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-kube-api-access-t5nns\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918442 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918476 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-ceph\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918507 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918538 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918569 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918599 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918624 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918668 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-scripts\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918685 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918708 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918726 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-logs\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918744 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-config-data\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.918804 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.920725 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-logs\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.922012 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.922319 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.922691 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.922982 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.923026 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.940868 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-ceph\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.941754 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.948384 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-scripts\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.950578 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-config-data\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.955057 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " 
pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.978674 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8rwf\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-kube-api-access-d8rwf\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:28 crc kubenswrapper[4719]: I1124 09:47:28.997300 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029277 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029337 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029371 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029393 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029413 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029474 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029559 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029617 4719 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t5nns\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-kube-api-access-t5nns\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.029669 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.034879 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-logs\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.037317 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.037579 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.041928 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.042055 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.051793 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-5xnf7"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.052909 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.065347 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.066177 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.080772 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.084010 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5nns\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-kube-api-access-t5nns\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.095163 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.209481 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-5xnf7"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.231085 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9ffj\" (UniqueName: \"kubernetes.io/projected/fef8c035-164f-4eab-9e45-70e0bdd48b10-kube-api-access-g9ffj\") pod \"manila-db-create-5xnf7\" (UID: \"fef8c035-164f-4eab-9e45-70e0bdd48b10\") " pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.231399 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fef8c035-164f-4eab-9e45-70e0bdd48b10-operator-scripts\") pod \"manila-db-create-5xnf7\" (UID: \"fef8c035-164f-4eab-9e45-70e0bdd48b10\") " pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:29 crc kubenswrapper[4719]: W1124 09:47:29.237388 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d9e3bfc_9c58_4534_89f9_72f35c264a80.slice/crio-640e2598c812c00f6be1093fc2ff860907db05207c75fd32aab4a7f2fca8d971 WatchSource:0}: Error finding container 640e2598c812c00f6be1093fc2ff860907db05207c75fd32aab4a7f2fca8d971: Status 404 returned error can't find the container with id 640e2598c812c00f6be1093fc2ff860907db05207c75fd32aab4a7f2fca8d971 Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.295791 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.337098 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9ffj\" (UniqueName: \"kubernetes.io/projected/fef8c035-164f-4eab-9e45-70e0bdd48b10-kube-api-access-g9ffj\") pod \"manila-db-create-5xnf7\" (UID: \"fef8c035-164f-4eab-9e45-70e0bdd48b10\") " pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.336998 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-ae80-account-create-m6k9l"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.337238 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fef8c035-164f-4eab-9e45-70e0bdd48b10-operator-scripts\") pod \"manila-db-create-5xnf7\" (UID: \"fef8c035-164f-4eab-9e45-70e0bdd48b10\") " pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.345066 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fef8c035-164f-4eab-9e45-70e0bdd48b10-operator-scripts\") pod \"manila-db-create-5xnf7\" (UID: \"fef8c035-164f-4eab-9e45-70e0bdd48b10\") " pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.347419 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.354718 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.375391 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9ffj\" (UniqueName: \"kubernetes.io/projected/fef8c035-164f-4eab-9e45-70e0bdd48b10-kube-api-access-g9ffj\") pod \"manila-db-create-5xnf7\" (UID: \"fef8c035-164f-4eab-9e45-70e0bdd48b10\") " pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.410107 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-ae80-account-create-m6k9l"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.410766 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.447062 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.479335 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9f56fdb97-g5shh"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.481321 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.484635 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jq6ph" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.487128 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.487986 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.488686 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.489199 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9f56fdb97-g5shh"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.489721 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.495383 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.511925 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.548628 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-operator-scripts\") pod \"manila-ae80-account-create-m6k9l\" (UID: \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\") " pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.549879 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-kube-api-access-t5gv9\") pod \"manila-ae80-account-create-m6k9l\" (UID: \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\") " pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.557552 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-678d5454cc-t98tb"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.559000 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.577516 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-678d5454cc-t98tb"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652241 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1c2d07f-677d-422e-a815-68ab2298cc39-horizon-secret-key\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652298 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-scripts\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652327 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd7sl\" (UniqueName: \"kubernetes.io/projected/b4a2a599-ea1c-4571-8dbe-afd67c313647-kube-api-access-cd7sl\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652360 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c2d07f-677d-422e-a815-68ab2298cc39-logs\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652383 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-config-data\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652410 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-config-data\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652424 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-scripts\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652452 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-operator-scripts\") pod \"manila-ae80-account-create-m6k9l\" (UID: \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\") " pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652479 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b4a2a599-ea1c-4571-8dbe-afd67c313647-logs\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652498 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-kube-api-access-t5gv9\") pod \"manila-ae80-account-create-m6k9l\" (UID: \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\") " pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652541 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjbjf\" (UniqueName: \"kubernetes.io/projected/a1c2d07f-677d-422e-a815-68ab2298cc39-kube-api-access-vjbjf\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.652572 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b4a2a599-ea1c-4571-8dbe-afd67c313647-horizon-secret-key\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.653289 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-operator-scripts\") pod \"manila-ae80-account-create-m6k9l\" (UID: \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\") " pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.680730 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-kube-api-access-t5gv9\") pod \"manila-ae80-account-create-m6k9l\" (UID: \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\") " pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755489 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1c2d07f-677d-422e-a815-68ab2298cc39-horizon-secret-key\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755546 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-scripts\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755575 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd7sl\" (UniqueName: \"kubernetes.io/projected/b4a2a599-ea1c-4571-8dbe-afd67c313647-kube-api-access-cd7sl\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755613 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a1c2d07f-677d-422e-a815-68ab2298cc39-logs\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755643 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-config-data\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755681 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-config-data\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755700 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-scripts\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755743 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a2a599-ea1c-4571-8dbe-afd67c313647-logs\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755801 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjbjf\" (UniqueName: \"kubernetes.io/projected/a1c2d07f-677d-422e-a815-68ab2298cc39-kube-api-access-vjbjf\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.755839 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b4a2a599-ea1c-4571-8dbe-afd67c313647-horizon-secret-key\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.756908 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-scripts\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.760958 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.761780 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-config-data\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.774947 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c2d07f-677d-422e-a815-68ab2298cc39-logs\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.775986 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1c2d07f-677d-422e-a815-68ab2298cc39-horizon-secret-key\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.777272 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b4a2a599-ea1c-4571-8dbe-afd67c313647-horizon-secret-key\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.777602 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-scripts\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.783567 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a2a599-ea1c-4571-8dbe-afd67c313647-logs\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.784814 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-config-data\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.785662 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjbjf\" (UniqueName: \"kubernetes.io/projected/a1c2d07f-677d-422e-a815-68ab2298cc39-kube-api-access-vjbjf\") pod \"horizon-678d5454cc-t98tb\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") " pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.801953 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd7sl\" (UniqueName: \"kubernetes.io/projected/b4a2a599-ea1c-4571-8dbe-afd67c313647-kube-api-access-cd7sl\") pod \"horizon-9f56fdb97-g5shh\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") " pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.849552 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9f56fdb97-g5shh" Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.882648 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 24 09:47:29 crc kubenswrapper[4719]: I1124 09:47:29.890415 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-678d5454cc-t98tb" Nov 24 09:47:30 crc kubenswrapper[4719]: I1124 09:47:30.222573 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9d9e3bfc-9c58-4534-89f9-72f35c264a80","Type":"ContainerStarted","Data":"640e2598c812c00f6be1093fc2ff860907db05207c75fd32aab4a7f2fca8d971"} Nov 24 09:47:30 crc kubenswrapper[4719]: I1124 09:47:30.225555 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"82bfb246-8a64-46b7-9223-f2158b114186","Type":"ContainerStarted","Data":"78a5cd007e0aad1b5a44dce3fa0bf2b8653d20c6a327a94b15e35b2869afdf5e"} Nov 24 09:47:30 crc kubenswrapper[4719]: I1124 09:47:30.378388 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-5xnf7"] Nov 24 09:47:30 crc kubenswrapper[4719]: I1124 09:47:30.489572 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:30 crc kubenswrapper[4719]: I1124 09:47:30.649159 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:30 crc kubenswrapper[4719]: W1124 09:47:30.684261 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfddc29ba_44c9_4eaf_b7ca_47a5e94f62f9.slice/crio-1cf56c57e8a98520e588295f03a2f567d6ada6c83bde5e8d261501737e3bb47c WatchSource:0}: Error finding container 1cf56c57e8a98520e588295f03a2f567d6ada6c83bde5e8d261501737e3bb47c: Status 404 returned error can't find the container with id 1cf56c57e8a98520e588295f03a2f567d6ada6c83bde5e8d261501737e3bb47c Nov 24 09:47:30 crc kubenswrapper[4719]: I1124 09:47:30.713266 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9f56fdb97-g5shh"] Nov 24 09:47:30 crc kubenswrapper[4719]: W1124 09:47:30.718330 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4a2a599_ea1c_4571_8dbe_afd67c313647.slice/crio-91c61dd03e658261a3c0efab65fb0f9cd66e50d6782c6aed4d06aa0f9f82323e WatchSource:0}: Error finding container 91c61dd03e658261a3c0efab65fb0f9cd66e50d6782c6aed4d06aa0f9f82323e: Status 404 returned error can't find the container with id 91c61dd03e658261a3c0efab65fb0f9cd66e50d6782c6aed4d06aa0f9f82323e Nov 24 09:47:30 crc kubenswrapper[4719]: I1124 09:47:30.828924 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-ae80-account-create-m6k9l"] Nov 24 09:47:30 crc kubenswrapper[4719]: I1124 09:47:30.841081 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-678d5454cc-t98tb"] Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.266023 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3162dd7-a503-44f7-a1e9-8d617948d14a","Type":"ContainerStarted","Data":"24bf33906a59d914bc2dd2d29246c76ef6360cfe1529d3989e40c27b744c8f48"} Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.275029 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9","Type":"ContainerStarted","Data":"1cf56c57e8a98520e588295f03a2f567d6ada6c83bde5e8d261501737e3bb47c"} Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.290247 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9f56fdb97-g5shh" event={"ID":"b4a2a599-ea1c-4571-8dbe-afd67c313647","Type":"ContainerStarted","Data":"91c61dd03e658261a3c0efab65fb0f9cd66e50d6782c6aed4d06aa0f9f82323e"} Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.299343 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ae80-account-create-m6k9l" event={"ID":"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441","Type":"ContainerStarted","Data":"8881051cc5fc59ebd5f5c1ac7cd147c280c35eb14a3096cee38b57826fac4c57"} Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.299387 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ae80-account-create-m6k9l" event={"ID":"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441","Type":"ContainerStarted","Data":"05071b61ffd9f9b752c24823d2aed4ca6e58bfdd02f09f1b1f8c0500a92f5c76"} Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.320572 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-ae80-account-create-m6k9l" podStartSLOduration=2.3205530899999998 podStartE2EDuration="2.32055309s" podCreationTimestamp="2025-11-24 09:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:31.319270823 +0000 UTC m=+3227.650544085" watchObservedRunningTime="2025-11-24 09:47:31.32055309 +0000 UTC m=+3227.651826362" Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.321147 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-5xnf7" event={"ID":"fef8c035-164f-4eab-9e45-70e0bdd48b10","Type":"ContainerStarted","Data":"e708192b8302e957628273cc52bff9da6b4101b1e6e1e796fdf9a9b5fe3539c5"} Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.321179 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-5xnf7" event={"ID":"fef8c035-164f-4eab-9e45-70e0bdd48b10","Type":"ContainerStarted","Data":"00c4f73cd5be5b0386a987717252d88342bc4396b4c317032b3a68cc40163681"} Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.327782 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678d5454cc-t98tb" event={"ID":"a1c2d07f-677d-422e-a815-68ab2298cc39","Type":"ContainerStarted","Data":"28da3b1e5c2492ef9bd76a1a1e5bbee1c2db5b4c1c0972623c9b89d6352e8b55"} Nov 24 09:47:31 crc kubenswrapper[4719]: I1124 09:47:31.383197 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-5xnf7" podStartSLOduration=2.383172919 podStartE2EDuration="2.383172919s" podCreationTimestamp="2025-11-24 09:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:31.337392571 +0000 UTC m=+3227.668665823" watchObservedRunningTime="2025-11-24 09:47:31.383172919 +0000 UTC m=+3227.714446171" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.370886 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9d9e3bfc-9c58-4534-89f9-72f35c264a80","Type":"ContainerStarted","Data":"d36b23758344a077392b69b28da21d47a24fab0f4f72190a632dc0db5c062570"} Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.374118 4719 
generic.go:334] "Generic (PLEG): container finished" podID="e5144bf8-a3e7-4c00-aca4-c9d0e02bf441" containerID="8881051cc5fc59ebd5f5c1ac7cd147c280c35eb14a3096cee38b57826fac4c57" exitCode=0 Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.374189 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ae80-account-create-m6k9l" event={"ID":"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441","Type":"ContainerDied","Data":"8881051cc5fc59ebd5f5c1ac7cd147c280c35eb14a3096cee38b57826fac4c57"} Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.376441 4719 generic.go:334] "Generic (PLEG): container finished" podID="fef8c035-164f-4eab-9e45-70e0bdd48b10" containerID="e708192b8302e957628273cc52bff9da6b4101b1e6e1e796fdf9a9b5fe3539c5" exitCode=0 Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.376480 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-5xnf7" event={"ID":"fef8c035-164f-4eab-9e45-70e0bdd48b10","Type":"ContainerDied","Data":"e708192b8302e957628273cc52bff9da6b4101b1e6e1e796fdf9a9b5fe3539c5"} Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.380173 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3162dd7-a503-44f7-a1e9-8d617948d14a","Type":"ContainerStarted","Data":"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e"} Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.387963 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9","Type":"ContainerStarted","Data":"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646"} Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.485566 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-678d5454cc-t98tb"] Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.509332 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-856dd4c45d-ncmv5"] Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.511301 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.513596 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.572868 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-856dd4c45d-ncmv5"] Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.656350 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-scripts\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.656425 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a1a623-2d79-4cf4-ab09-544510edc8f5-logs\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.656621 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-combined-ca-bundle\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.656815 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-secret-key\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.656872 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-config-data\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.656893 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxk7d\" (UniqueName: \"kubernetes.io/projected/88a1a623-2d79-4cf4-ab09-544510edc8f5-kube-api-access-xxk7d\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.656920 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-tls-certs\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.673834 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9f56fdb97-g5shh"] Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.711244 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f6b7744d-ql24k"] Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.712877 
4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.723895 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f6b7744d-ql24k"] Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.758893 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-combined-ca-bundle\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.759006 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-secret-key\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.759050 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-config-data\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.759069 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxk7d\" (UniqueName: \"kubernetes.io/projected/88a1a623-2d79-4cf4-ab09-544510edc8f5-kube-api-access-xxk7d\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.759088 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-tls-certs\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.759161 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-scripts\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.759187 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a1a623-2d79-4cf4-ab09-544510edc8f5-logs\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.760377 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a1a623-2d79-4cf4-ab09-544510edc8f5-logs\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.761126 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-scripts\") pod \"horizon-856dd4c45d-ncmv5\" 
(UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.763339 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-config-data\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.766737 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-tls-certs\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.767245 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-combined-ca-bundle\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.772798 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-secret-key\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.775768 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxk7d\" (UniqueName: \"kubernetes.io/projected/88a1a623-2d79-4cf4-ab09-544510edc8f5-kube-api-access-xxk7d\") pod \"horizon-856dd4c45d-ncmv5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") " pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.873895 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/494049ce-0355-420c-9d3b-774f7befb12a-config-data\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.874265 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-horizon-secret-key\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.874369 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-combined-ca-bundle\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.874457 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/494049ce-0355-420c-9d3b-774f7befb12a-scripts\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " 
pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.881084 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-horizon-tls-certs\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.881272 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494049ce-0355-420c-9d3b-774f7befb12a-logs\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.881557 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49x9h\" (UniqueName: \"kubernetes.io/projected/494049ce-0355-420c-9d3b-774f7befb12a-kube-api-access-49x9h\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.991618 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/494049ce-0355-420c-9d3b-774f7befb12a-config-data\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.991697 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-horizon-secret-key\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.991875 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-combined-ca-bundle\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.991921 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/494049ce-0355-420c-9d3b-774f7befb12a-scripts\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.992003 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-horizon-tls-certs\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.992055 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494049ce-0355-420c-9d3b-774f7befb12a-logs\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.992090 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49x9h\" (UniqueName: \"kubernetes.io/projected/494049ce-0355-420c-9d3b-774f7befb12a-kube-api-access-49x9h\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:32 crc kubenswrapper[4719]: I1124 09:47:32.993446 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/494049ce-0355-420c-9d3b-774f7befb12a-config-data\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.001129 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494049ce-0355-420c-9d3b-774f7befb12a-logs\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.006219 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-horizon-secret-key\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.011316 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/494049ce-0355-420c-9d3b-774f7befb12a-scripts\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.015526 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-combined-ca-bundle\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.015740 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/494049ce-0355-420c-9d3b-774f7befb12a-horizon-tls-certs\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.020307 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856dd4c45d-ncmv5" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.026416 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49x9h\" (UniqueName: \"kubernetes.io/projected/494049ce-0355-420c-9d3b-774f7befb12a-kube-api-access-49x9h\") pod \"horizon-5f6b7744d-ql24k\" (UID: \"494049ce-0355-420c-9d3b-774f7befb12a\") " pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.060450 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f6b7744d-ql24k" Nov 24 09:47:33 crc kubenswrapper[4719]: I1124 09:47:33.426752 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9d9e3bfc-9c58-4534-89f9-72f35c264a80","Type":"ContainerStarted","Data":"1e97614a2bd8bd5b635be016830d3193a32de7e7f928f6582155567dc554e2e4"} Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.100844 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.138260 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fef8c035-164f-4eab-9e45-70e0bdd48b10-operator-scripts\") pod \"fef8c035-164f-4eab-9e45-70e0bdd48b10\" (UID: \"fef8c035-164f-4eab-9e45-70e0bdd48b10\") " Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.138344 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9ffj\" (UniqueName: \"kubernetes.io/projected/fef8c035-164f-4eab-9e45-70e0bdd48b10-kube-api-access-g9ffj\") pod \"fef8c035-164f-4eab-9e45-70e0bdd48b10\" (UID: \"fef8c035-164f-4eab-9e45-70e0bdd48b10\") " Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.139843 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef8c035-164f-4eab-9e45-70e0bdd48b10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fef8c035-164f-4eab-9e45-70e0bdd48b10" (UID: "fef8c035-164f-4eab-9e45-70e0bdd48b10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.153847 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef8c035-164f-4eab-9e45-70e0bdd48b10-kube-api-access-g9ffj" (OuterVolumeSpecName: "kube-api-access-g9ffj") pod "fef8c035-164f-4eab-9e45-70e0bdd48b10" (UID: "fef8c035-164f-4eab-9e45-70e0bdd48b10"). InnerVolumeSpecName "kube-api-access-g9ffj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.214908 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=5.150466003 podStartE2EDuration="7.214846324s" podCreationTimestamp="2025-11-24 09:47:27 +0000 UTC" firstStartedPulling="2025-11-24 09:47:29.29227654 +0000 UTC m=+3225.623549792" lastFinishedPulling="2025-11-24 09:47:31.356656861 +0000 UTC m=+3227.687930113" observedRunningTime="2025-11-24 09:47:33.461305975 +0000 UTC m=+3229.792579257" watchObservedRunningTime="2025-11-24 09:47:34.214846324 +0000 UTC m=+3230.546119576" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.253495 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fef8c035-164f-4eab-9e45-70e0bdd48b10-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.253531 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9ffj\" (UniqueName: \"kubernetes.io/projected/fef8c035-164f-4eab-9e45-70e0bdd48b10-kube-api-access-g9ffj\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.269668 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f6b7744d-ql24k"] Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.289880 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-856dd4c45d-ncmv5"] Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.413559 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.457220 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-kube-api-access-t5gv9\") pod \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\" (UID: \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\") " Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.457452 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-operator-scripts\") pod \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\" (UID: \"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441\") " Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.458570 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5144bf8-a3e7-4c00-aca4-c9d0e02bf441" (UID: "e5144bf8-a3e7-4c00-aca4-c9d0e02bf441"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.482659 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-kube-api-access-t5gv9" (OuterVolumeSpecName: "kube-api-access-t5gv9") pod "e5144bf8-a3e7-4c00-aca4-c9d0e02bf441" (UID: "e5144bf8-a3e7-4c00-aca4-c9d0e02bf441"). InnerVolumeSpecName "kube-api-access-t5gv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.491986 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856dd4c45d-ncmv5" event={"ID":"88a1a623-2d79-4cf4-ab09-544510edc8f5","Type":"ContainerStarted","Data":"17079676cd9fe0f71ce9c33e35f942076a7b168c6642e0bbff102ee697ed61d0"} Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.500523 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9","Type":"ContainerStarted","Data":"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8"} Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.500675 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerName="glance-log" containerID="cri-o://3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646" gracePeriod=30 Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.502552 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerName="glance-httpd" containerID="cri-o://84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8" gracePeriod=30 Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.510441 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-ae80-account-create-m6k9l" event={"ID":"e5144bf8-a3e7-4c00-aca4-c9d0e02bf441","Type":"ContainerDied","Data":"05071b61ffd9f9b752c24823d2aed4ca6e58bfdd02f09f1b1f8c0500a92f5c76"} Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.510477 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05071b61ffd9f9b752c24823d2aed4ca6e58bfdd02f09f1b1f8c0500a92f5c76" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.510532 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-ae80-account-create-m6k9l" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.542016 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.54199613 podStartE2EDuration="7.54199613s" podCreationTimestamp="2025-11-24 09:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:34.522548605 +0000 UTC m=+3230.853821867" watchObservedRunningTime="2025-11-24 09:47:34.54199613 +0000 UTC m=+3230.873269382" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.561376 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gv9\" (UniqueName: \"kubernetes.io/projected/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-kube-api-access-t5gv9\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.561422 4719 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.562844 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-5xnf7" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.563667 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"82bfb246-8a64-46b7-9223-f2158b114186","Type":"ContainerStarted","Data":"551713c186a255cafc9cb415fa84508954567cf59a5c0c86ff73c9e68de1ff9c"} Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.566391 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-5xnf7" event={"ID":"fef8c035-164f-4eab-9e45-70e0bdd48b10","Type":"ContainerDied","Data":"00c4f73cd5be5b0386a987717252d88342bc4396b4c317032b3a68cc40163681"} Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.575961 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c4f73cd5be5b0386a987717252d88342bc4396b4c317032b3a68cc40163681" Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.576400 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6b7744d-ql24k" event={"ID":"494049ce-0355-420c-9d3b-774f7befb12a","Type":"ContainerStarted","Data":"b59d0c6d653deb397d71c591da533586ad2293b6919a6ebcafd527dbd3580743"} Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.577239 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3162dd7-a503-44f7-a1e9-8d617948d14a","Type":"ContainerStarted","Data":"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d"} Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.577408 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerName="glance-log" containerID="cri-o://96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e" gracePeriod=30 Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.577438 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerName="glance-httpd" containerID="cri-o://c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d" gracePeriod=30 Nov 24 09:47:34 crc kubenswrapper[4719]: I1124 09:47:34.736898 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.736880209 podStartE2EDuration="7.736880209s" podCreationTimestamp="2025-11-24 09:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:34.729415755 +0000 UTC m=+3231.060689027" watchObservedRunningTime="2025-11-24 09:47:34.736880209 +0000 UTC m=+3231.068153461" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.329396 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.395882 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-config-data\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.396599 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-logs\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.396638 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.396671 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-httpd-run\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.396707 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-combined-ca-bundle\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.396787 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-internal-tls-certs\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.396875 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-scripts\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.396910 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5nns\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-kube-api-access-t5nns\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.396976 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-ceph\") pod \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\" (UID: \"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.418483 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.418983 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-logs" (OuterVolumeSpecName: "logs") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.439486 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.439972 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-scripts" (OuterVolumeSpecName: "scripts") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.441111 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-ceph" (OuterVolumeSpecName: "ceph") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.446147 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-kube-api-access-t5nns" (OuterVolumeSpecName: "kube-api-access-t5nns") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "kube-api-access-t5nns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.524689 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.524725 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.524749 4719 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.524763 4719 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.524774 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.524787 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5nns\" (UniqueName: \"kubernetes.io/projected/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-kube-api-access-t5nns\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.527934 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e" Nov 24 09:47:35 crc kubenswrapper[4719]: E1124 09:47:35.528326 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.598899 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.642661 4719 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.661949 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.673620 4719 generic.go:334] "Generic (PLEG): container finished" podID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerID="84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8" exitCode=143 Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.673657 4719 generic.go:334] "Generic (PLEG): container finished" podID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerID="3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646" exitCode=143 Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.673822 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.674825 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9","Type":"ContainerDied","Data":"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8"} Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.674854 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9","Type":"ContainerDied","Data":"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646"} Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.674864 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9","Type":"ContainerDied","Data":"1cf56c57e8a98520e588295f03a2f567d6ada6c83bde5e8d261501737e3bb47c"} Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.674932 4719 scope.go:117] "RemoveContainer" containerID="84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.685475 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.705213 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"82bfb246-8a64-46b7-9223-f2158b114186","Type":"ContainerStarted","Data":"0e1c27c9e66ef5097df1a74d86044d3aecb211ba74243900dafde55f0ea0c579"} Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.723668 4719 generic.go:334] "Generic (PLEG): container finished" podID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerID="c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d" exitCode=143 Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.723702 4719 generic.go:334] "Generic (PLEG): container finished" podID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerID="96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e" exitCode=143 Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.723738 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3162dd7-a503-44f7-a1e9-8d617948d14a","Type":"ContainerDied","Data":"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d"} Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.723767 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3162dd7-a503-44f7-a1e9-8d617948d14a","Type":"ContainerDied","Data":"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e"} Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.723779 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d3162dd7-a503-44f7-a1e9-8d617948d14a","Type":"ContainerDied","Data":"24bf33906a59d914bc2dd2d29246c76ef6360cfe1529d3989e40c27b744c8f48"} Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.723856 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751076 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-ceph\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751196 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-logs\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751235 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-public-tls-certs\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751346 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751415 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-combined-ca-bundle\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751457 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-config-data\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751573 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-httpd-run\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751596 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-scripts\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.751646 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8rwf\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-kube-api-access-d8rwf\") pod \"d3162dd7-a503-44f7-a1e9-8d617948d14a\" (UID: \"d3162dd7-a503-44f7-a1e9-8d617948d14a\") " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.752196 4719 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.752211 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.752219 4719 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.753249 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-logs" (OuterVolumeSpecName: "logs") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.774672 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.787612 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.793221 4719 scope.go:117] "RemoveContainer" containerID="3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.798304 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-kube-api-access-d8rwf" (OuterVolumeSpecName: "kube-api-access-d8rwf") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "kube-api-access-d8rwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.798430 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-ceph" (OuterVolumeSpecName: "ceph") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.800237 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-config-data" (OuterVolumeSpecName: "config-data") pod "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" (UID: "fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.804878 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=5.47032722 podStartE2EDuration="8.804854192s" podCreationTimestamp="2025-11-24 09:47:27 +0000 UTC" firstStartedPulling="2025-11-24 09:47:29.969610362 +0000 UTC m=+3226.300883614" lastFinishedPulling="2025-11-24 09:47:33.304137334 +0000 UTC m=+3229.635410586" observedRunningTime="2025-11-24 09:47:35.79883925 +0000 UTC m=+3232.130112502" watchObservedRunningTime="2025-11-24 09:47:35.804854192 +0000 UTC m=+3232.136127454" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.808858 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-scripts" (OuterVolumeSpecName: "scripts") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.856704 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.856745 4719 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.856758 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.856769 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8rwf\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-kube-api-access-d8rwf\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.856782 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d3162dd7-a503-44f7-a1e9-8d617948d14a-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.856792 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3162dd7-a503-44f7-a1e9-8d617948d14a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.856814 4719 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.891297 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.930071 4719 scope.go:117] "RemoveContainer" containerID="84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.930478 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: E1124 09:47:35.931688 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8\": container with ID starting with 84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8 not found: ID does not exist" containerID="84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.931728 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8"} err="failed to get container status \"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8\": rpc error: code = NotFound desc = could not find container \"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8\": container with ID starting with 84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8 not found: ID does not exist" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.931752 4719 scope.go:117] "RemoveContainer" containerID="3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.943636 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-config-data" (OuterVolumeSpecName: "config-data") pod "d3162dd7-a503-44f7-a1e9-8d617948d14a" (UID: "d3162dd7-a503-44f7-a1e9-8d617948d14a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:47:35 crc kubenswrapper[4719]: E1124 09:47:35.946616 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646\": container with ID starting with 3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646 not found: ID does not exist" containerID="3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.946651 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646"} err="failed to get container status \"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646\": rpc error: code = NotFound desc = could not find container \"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646\": container with ID starting with 3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646 not found: ID does not exist" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.946675 4719 scope.go:117] "RemoveContainer" containerID="84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.951516 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8"} err="failed to get container status \"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8\": rpc error: code = NotFound desc = could not find container \"84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8\": container with ID starting with 84c9df72d1574048de70f33b171f426c47a7d9898ecb4453a7bb529bb08da8e8 not found: ID does not exist" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.951564 4719 scope.go:117] "RemoveContainer" containerID="3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.952830 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646"} err="failed to get container status \"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646\": rpc error: code = NotFound desc = could not find container \"3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646\": container with ID starting with 3bba37aa5480ea369420121cf5904657fb44bfe047c6a0bfae206da53d2c0646 not found: ID does not exist" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.952875 4719 scope.go:117] "RemoveContainer" containerID="c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.953672 4719 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.959620 4719 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.959646 4719 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node 
\"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.959658 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:35 crc kubenswrapper[4719]: I1124 09:47:35.959668 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3162dd7-a503-44f7-a1e9-8d617948d14a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.002427 4719 scope.go:117] "RemoveContainer" containerID="96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.024548 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.029609 4719 scope.go:117] "RemoveContainer" containerID="c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d" Nov 24 09:47:36 crc kubenswrapper[4719]: E1124 09:47:36.030954 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d\": container with ID starting with c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d not found: ID does not exist" containerID="c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.030992 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d"} err="failed to get container status \"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d\": rpc error: code = NotFound desc = could not find container \"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d\": container with ID starting with c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d not found: ID does not exist" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.031439 4719 scope.go:117] "RemoveContainer" containerID="96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.031965 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:36 crc kubenswrapper[4719]: E1124 09:47:36.032305 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e\": container with ID starting with 96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e not found: ID does not exist" containerID="96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.032338 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e"} err="failed to get container status \"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e\": rpc error: code = NotFound desc = could not find container \"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e\": container with ID starting with 96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e not found: ID does not exist" Nov 24 
09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.032549 4719 scope.go:117] "RemoveContainer" containerID="c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.033552 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d"} err="failed to get container status \"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d\": rpc error: code = NotFound desc = could not find container \"c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d\": container with ID starting with c335a2e73bf27891a46659bca35b35bc7c940e8a645704257e76d15079a6494d not found: ID does not exist" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.033596 4719 scope.go:117] "RemoveContainer" containerID="96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.034011 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e"} err="failed to get container status \"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e\": rpc error: code = NotFound desc = could not find container \"96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e\": container with ID starting with 96f04ef320b513ea67801656f7c05e6134032fc2501b4293e7c724eda58fb41e not found: ID does not exist" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.069412 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:36 crc kubenswrapper[4719]: E1124 09:47:36.070386 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef8c035-164f-4eab-9e45-70e0bdd48b10" containerName="mariadb-database-create" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.070496 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef8c035-164f-4eab-9e45-70e0bdd48b10" containerName="mariadb-database-create" Nov 24 09:47:36 crc kubenswrapper[4719]: E1124 09:47:36.070603 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5144bf8-a3e7-4c00-aca4-c9d0e02bf441" containerName="mariadb-account-create" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.070680 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5144bf8-a3e7-4c00-aca4-c9d0e02bf441" containerName="mariadb-account-create" Nov 24 09:47:36 crc kubenswrapper[4719]: E1124 09:47:36.070772 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerName="glance-log" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.070851 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerName="glance-log" Nov 24 09:47:36 crc kubenswrapper[4719]: E1124 09:47:36.070935 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerName="glance-httpd" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.071069 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerName="glance-httpd" Nov 24 09:47:36 crc kubenswrapper[4719]: E1124 09:47:36.071179 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerName="glance-log" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 
09:47:36.071269 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerName="glance-log" Nov 24 09:47:36 crc kubenswrapper[4719]: E1124 09:47:36.071377 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerName="glance-httpd" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.071460 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerName="glance-httpd" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.071754 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerName="glance-httpd" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.071853 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" containerName="glance-log" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.071936 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5144bf8-a3e7-4c00-aca4-c9d0e02bf441" containerName="mariadb-account-create" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.072028 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerName="glance-log" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.072246 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" containerName="glance-httpd" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.072333 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef8c035-164f-4eab-9e45-70e0bdd48b10" containerName="mariadb-database-create" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.073874 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.077311 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vwfrr" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.077483 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.077594 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.077821 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.115302 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163644 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163723 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163776 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163809 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163837 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e745f799-46a2-4fd7-b32d-09a11558070b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163888 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qglxm\" (UniqueName: \"kubernetes.io/projected/e745f799-46a2-4fd7-b32d-09a11558070b-kube-api-access-qglxm\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163931 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163967 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e745f799-46a2-4fd7-b32d-09a11558070b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.163989 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e745f799-46a2-4fd7-b32d-09a11558070b-logs\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.249201 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266336 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266438 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266496 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266526 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266559 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e745f799-46a2-4fd7-b32d-09a11558070b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266615 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qglxm\" (UniqueName: \"kubernetes.io/projected/e745f799-46a2-4fd7-b32d-09a11558070b-kube-api-access-qglxm\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266647 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266695 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e745f799-46a2-4fd7-b32d-09a11558070b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.266715 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e745f799-46a2-4fd7-b32d-09a11558070b-logs\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.267313 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e745f799-46a2-4fd7-b32d-09a11558070b-logs\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.267919 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e745f799-46a2-4fd7-b32d-09a11558070b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.268156 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.275596 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.289537 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.294738 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.304837 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e745f799-46a2-4fd7-b32d-09a11558070b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.304836 
4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qglxm\" (UniqueName: \"kubernetes.io/projected/e745f799-46a2-4fd7-b32d-09a11558070b-kube-api-access-qglxm\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.311667 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.313754 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.315669 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.315827 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e745f799-46a2-4fd7-b32d-09a11558070b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.317803 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.330509 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.355650 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.369634 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.369727 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b2a5521-1fe8-40c7-af69-18332a312c14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.369761 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b2a5521-1fe8-40c7-af69-18332a312c14-logs\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.369839 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0b2a5521-1fe8-40c7-af69-18332a312c14-ceph\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 
crc kubenswrapper[4719]: I1124 09:47:36.369922 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcs6f\" (UniqueName: \"kubernetes.io/projected/0b2a5521-1fe8-40c7-af69-18332a312c14-kube-api-access-jcs6f\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.369945 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.369975 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.370092 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.370153 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.418256 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e745f799-46a2-4fd7-b32d-09a11558070b\") " pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.472816 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcs6f\" (UniqueName: \"kubernetes.io/projected/0b2a5521-1fe8-40c7-af69-18332a312c14-kube-api-access-jcs6f\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.472859 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.472895 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 
09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.472985 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.473024 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.473087 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.473161 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b2a5521-1fe8-40c7-af69-18332a312c14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.473191 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b2a5521-1fe8-40c7-af69-18332a312c14-logs\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.473246 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0b2a5521-1fe8-40c7-af69-18332a312c14-ceph\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.473802 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.480640 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b2a5521-1fe8-40c7-af69-18332a312c14-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.482150 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b2a5521-1fe8-40c7-af69-18332a312c14-logs\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.497521 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.499099 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.499587 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.506961 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcs6f\" (UniqueName: \"kubernetes.io/projected/0b2a5521-1fe8-40c7-af69-18332a312c14-kube-api-access-jcs6f\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.508471 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2a5521-1fe8-40c7-af69-18332a312c14-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.519281 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0b2a5521-1fe8-40c7-af69-18332a312c14-ceph\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.521656 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.557467 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3162dd7-a503-44f7-a1e9-8d617948d14a" path="/var/lib/kubelet/pods/d3162dd7-a503-44f7-a1e9-8d617948d14a/volumes" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.558243 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"0b2a5521-1fe8-40c7-af69-18332a312c14\") " pod="openstack/glance-default-external-api-0" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.558253 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9" path="/var/lib/kubelet/pods/fddc29ba-44c9-4eaf-b7ca-47a5e94f62f9/volumes" Nov 24 09:47:36 crc kubenswrapper[4719]: I1124 09:47:36.821259 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 09:47:37 crc kubenswrapper[4719]: I1124 09:47:37.335220 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 09:47:37 crc kubenswrapper[4719]: I1124 09:47:37.692341 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 09:47:37 crc kubenswrapper[4719]: I1124 09:47:37.821766 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b2a5521-1fe8-40c7-af69-18332a312c14","Type":"ContainerStarted","Data":"eb91f45148c2c0f367fdebecd2d4dca16c301d8fc7b651ee3428b3ed0b8fdc82"} Nov 24 09:47:37 crc kubenswrapper[4719]: I1124 09:47:37.823751 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e745f799-46a2-4fd7-b32d-09a11558070b","Type":"ContainerStarted","Data":"af9b5ceb3f9029d520560bf1adaf52e836de7959009b462c24cb3f4dc3042fea"} Nov 24 09:47:38 crc kubenswrapper[4719]: I1124 09:47:38.202416 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 24 09:47:38 crc kubenswrapper[4719]: I1124 09:47:38.546462 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 24 09:47:38 crc kubenswrapper[4719]: I1124 09:47:38.793562 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="9d9e3bfc-9c58-4534-89f9-72f35c264a80" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 09:47:38 crc kubenswrapper[4719]: I1124 09:47:38.872853 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e745f799-46a2-4fd7-b32d-09a11558070b","Type":"ContainerStarted","Data":"2526b01fdbb390da8a9c0cf9191f43d97125fd3254fb741b0b4278bb8d41f6c6"} Nov 24 09:47:38 crc kubenswrapper[4719]: I1124 09:47:38.878149 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b2a5521-1fe8-40c7-af69-18332a312c14","Type":"ContainerStarted","Data":"c4093169dc276bc399aaa0a537a37a9ff1089b0c792cd20f817846947540b0b8"} Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.547120 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-nv6wn"] Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.548620 4719 util.go:30] "No sandbox for pod can be found. 
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.563790 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.564011 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-7q75z"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.568926 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-nv6wn"]
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.594261 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7psx\" (UniqueName: \"kubernetes.io/projected/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-kube-api-access-g7psx\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.594305 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-config-data\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.594355 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-combined-ca-bundle\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.594530 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-job-config-data\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.696446 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7psx\" (UniqueName: \"kubernetes.io/projected/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-kube-api-access-g7psx\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.696497 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-config-data\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.696536 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-combined-ca-bundle\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.696621 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-job-config-data\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.712277 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-config-data\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.720088 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-combined-ca-bundle\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.744873 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-job-config-data\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.807601 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7psx\" (UniqueName: \"kubernetes.io/projected/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-kube-api-access-g7psx\") pod \"manila-db-sync-nv6wn\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:39 crc kubenswrapper[4719]: I1124 09:47:39.901752 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-nv6wn"
Nov 24 09:47:40 crc kubenswrapper[4719]: I1124 09:47:40.954956 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b2a5521-1fe8-40c7-af69-18332a312c14","Type":"ContainerStarted","Data":"c2480e402eeddd41fb921042ceef8c65e15f4b05c19cffc438af9fd59765f24d"}
Nov 24 09:47:40 crc kubenswrapper[4719]: I1124 09:47:40.970200 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e745f799-46a2-4fd7-b32d-09a11558070b","Type":"ContainerStarted","Data":"8dc1dcba10a4b3b67046851e8730aa7f2e137080ff913f5d4410ec71190b1f56"}
Nov 24 09:47:41 crc kubenswrapper[4719]: I1124 09:47:41.008785 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.008762645 podStartE2EDuration="5.008762645s" podCreationTimestamp="2025-11-24 09:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:40.992184981 +0000 UTC m=+3237.323458243" watchObservedRunningTime="2025-11-24 09:47:41.008762645 +0000 UTC m=+3237.340035897"
Nov 24 09:47:41 crc kubenswrapper[4719]: I1124 09:47:41.029651 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-nv6wn"]
Nov 24 09:47:41 crc kubenswrapper[4719]: I1124 09:47:41.030761 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.030741193 podStartE2EDuration="5.030741193s" podCreationTimestamp="2025-11-24 09:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:47:41.02152458 +0000 UTC m=+3237.352797842" watchObservedRunningTime="2025-11-24 09:47:41.030741193 +0000 UTC m=+3237.362014445"
Nov 24 09:47:43 crc kubenswrapper[4719]: I1124 09:47:43.214468 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Nov 24 09:47:43 crc kubenswrapper[4719]: I1124 09:47:43.852239 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Nov 24 09:47:46 crc kubenswrapper[4719]: I1124 09:47:46.531641 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 09:47:46 crc kubenswrapper[4719]: I1124 09:47:46.532179 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 09:47:46 crc kubenswrapper[4719]: I1124 09:47:46.560472 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 09:47:46 crc kubenswrapper[4719]: I1124 09:47:46.566397 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 09:47:46 crc kubenswrapper[4719]: I1124 09:47:46.822272 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 09:47:46 crc kubenswrapper[4719]: I1124 09:47:46.822314 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 09:47:46 crc kubenswrapper[4719]: I1124 09:47:46.853254 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 09:47:46 crc kubenswrapper[4719]: I1124 09:47:46.874483 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 09:47:47 crc kubenswrapper[4719]: I1124 09:47:47.049140 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-nv6wn" event={"ID":"1e8128a6-20cf-4abd-a677-fc1d0f61fd23","Type":"ContainerStarted","Data":"204cc4dc0b9f7abe16127b527d594dceda4f977aa7dd25185dd4c46f65d7493f"}
Nov 24 09:47:47 crc kubenswrapper[4719]: I1124 09:47:47.049287 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 09:47:47 crc kubenswrapper[4719]: I1124 09:47:47.049303 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 09:47:47 crc kubenswrapper[4719]: I1124 09:47:47.049541 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 09:47:47 crc kubenswrapper[4719]: I1124 09:47:47.049647 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.060356 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856dd4c45d-ncmv5" event={"ID":"88a1a623-2d79-4cf4-ab09-544510edc8f5","Type":"ContainerStarted","Data":"f52e94cdbb283ee04dc8651e8114525801f47379e23293c15914887791892c4c"}
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.063727 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9f56fdb97-g5shh" event={"ID":"b4a2a599-ea1c-4571-8dbe-afd67c313647","Type":"ContainerStarted","Data":"92cee3f0cf9d3457fe2ba5a2a21f73cd2c0002d11adaed7bfb3bf1ce97eea47f"}
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.063827 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9f56fdb97-g5shh" event={"ID":"b4a2a599-ea1c-4571-8dbe-afd67c313647","Type":"ContainerStarted","Data":"e2e4a59f1150967a88ca6e644745b064eaf21723b4a265e06249691e1bbc90c9"}
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.064018 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9f56fdb97-g5shh" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerName="horizon-log" containerID="cri-o://e2e4a59f1150967a88ca6e644745b064eaf21723b4a265e06249691e1bbc90c9" gracePeriod=30
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.064599 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9f56fdb97-g5shh" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerName="horizon" containerID="cri-o://92cee3f0cf9d3457fe2ba5a2a21f73cd2c0002d11adaed7bfb3bf1ce97eea47f" gracePeriod=30
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.070014 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678d5454cc-t98tb" event={"ID":"a1c2d07f-677d-422e-a815-68ab2298cc39","Type":"ContainerStarted","Data":"be89360286ac743253c4d35f6acdf293ef676ed0e5e71c7a07f27c16c2470b29"}
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.070075 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678d5454cc-t98tb" event={"ID":"a1c2d07f-677d-422e-a815-68ab2298cc39","Type":"ContainerStarted","Data":"63cb288d575e5a7bfd4e98bf1b25910f8f6d7c8aee7964281c2876e89d964c26"}
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.070355 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-678d5454cc-t98tb" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerName="horizon" containerID="cri-o://be89360286ac743253c4d35f6acdf293ef676ed0e5e71c7a07f27c16c2470b29" gracePeriod=30
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.070383 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-678d5454cc-t98tb" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerName="horizon-log" containerID="cri-o://63cb288d575e5a7bfd4e98bf1b25910f8f6d7c8aee7964281c2876e89d964c26" gracePeriod=30
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.079195 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6b7744d-ql24k" event={"ID":"494049ce-0355-420c-9d3b-774f7befb12a","Type":"ContainerStarted","Data":"164ef5f877a15bcdcca5a500241698e46500917551b136ad8a32ccae3285fb9a"}
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.079245 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6b7744d-ql24k" event={"ID":"494049ce-0355-420c-9d3b-774f7befb12a","Type":"ContainerStarted","Data":"7faef8dbd1b106e692629096c6d65fb1f62376422a874ce6bce7e270b728886a"}
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.147718 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-678d5454cc-t98tb" podStartSLOduration=2.938498095 podStartE2EDuration="19.147699114s" podCreationTimestamp="2025-11-24 09:47:29 +0000 UTC" firstStartedPulling="2025-11-24 09:47:30.895410423 +0000 UTC m=+3227.226683675" lastFinishedPulling="2025-11-24 09:47:47.104611442 +0000 UTC m=+3243.435884694" observedRunningTime="2025-11-24 09:47:48.145263865 +0000 UTC m=+3244.476537137" watchObservedRunningTime="2025-11-24 09:47:48.147699114 +0000 UTC m=+3244.478972386"
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.157804 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9f56fdb97-g5shh" podStartSLOduration=2.709027009 podStartE2EDuration="19.157785632s" podCreationTimestamp="2025-11-24 09:47:29 +0000 UTC" firstStartedPulling="2025-11-24 09:47:30.720465645 +0000 UTC m=+3227.051738897" lastFinishedPulling="2025-11-24 09:47:47.169224268 +0000 UTC m=+3243.500497520" observedRunningTime="2025-11-24 09:47:48.110150311 +0000 UTC m=+3244.441423583" watchObservedRunningTime="2025-11-24 09:47:48.157785632 +0000 UTC m=+3244.489058894"
Nov 24 09:47:48 crc kubenswrapper[4719]: I1124 09:47:48.230164 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f6b7744d-ql24k" podStartSLOduration=3.475782521 podStartE2EDuration="16.23013659s" podCreationTimestamp="2025-11-24 09:47:32 +0000 UTC" firstStartedPulling="2025-11-24 09:47:34.349778089 +0000 UTC m=+3230.681051341" lastFinishedPulling="2025-11-24 09:47:47.104132158 +0000 UTC m=+3243.435405410" observedRunningTime="2025-11-24 09:47:48.202579032 +0000 UTC m=+3244.533852304" watchObservedRunningTime="2025-11-24 09:47:48.23013659 +0000 UTC m=+3244.561409852"
Nov 24 09:47:49 crc kubenswrapper[4719]: I1124 09:47:49.092194 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856dd4c45d-ncmv5" event={"ID":"88a1a623-2d79-4cf4-ab09-544510edc8f5","Type":"ContainerStarted","Data":"32b40757ee333fd3df11398d7e533f2c533c1860a5d953055c40474c63446049"}
Nov 24 09:47:49 crc kubenswrapper[4719]: I1124 09:47:49.850669 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9f56fdb97-g5shh"
Nov 24 09:47:49 crc kubenswrapper[4719]: I1124 09:47:49.891661 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-678d5454cc-t98tb"
Nov 24 09:47:50 crc kubenswrapper[4719]: I1124 09:47:50.525080 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"
Nov 24 09:47:50 crc kubenswrapper[4719]: E1124 09:47:50.525348 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:47:52 crc kubenswrapper[4719]: I1124 09:47:52.085094 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 09:47:52 crc kubenswrapper[4719]: I1124 09:47:52.085413 4719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 09:47:52 crc kubenswrapper[4719]: I1124 09:47:52.093686 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 09:47:52 crc kubenswrapper[4719]: I1124 09:47:52.093808 4719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 09:47:52 crc kubenswrapper[4719]: I1124 09:47:52.095448 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 09:47:52 crc kubenswrapper[4719]: I1124 09:47:52.096765 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 09:47:52 crc kubenswrapper[4719]: I1124 09:47:52.107826 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-856dd4c45d-ncmv5" podStartSLOduration=7.341768647 podStartE2EDuration="20.10781188s" podCreationTimestamp="2025-11-24 09:47:32 +0000 UTC" firstStartedPulling="2025-11-24 09:47:34.380429005 +0000 UTC m=+3230.711702257" lastFinishedPulling="2025-11-24 09:47:47.146472238 +0000 UTC m=+3243.477745490" observedRunningTime="2025-11-24 09:47:49.127252271 +0000 UTC m=+3245.458525533" watchObservedRunningTime="2025-11-24 09:47:52.10781188 +0000 UTC m=+3248.439085132"
Nov 24 09:47:53 crc kubenswrapper[4719]: I1124 09:47:53.025182 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-856dd4c45d-ncmv5"
Nov 24 09:47:53 crc kubenswrapper[4719]: I1124 09:47:53.025469 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-856dd4c45d-ncmv5"
Nov 24 09:47:53 crc kubenswrapper[4719]: I1124 09:47:53.061417 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f6b7744d-ql24k"
Nov 24 09:47:53 crc kubenswrapper[4719]: I1124 09:47:53.061464 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f6b7744d-ql24k"
Nov 24 09:48:00 crc kubenswrapper[4719]: I1124 09:48:00.220638 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-nv6wn" event={"ID":"1e8128a6-20cf-4abd-a677-fc1d0f61fd23","Type":"ContainerStarted","Data":"306cd7df2dd46e45722c7f6c6ddde4d023189804166a5f0d10db2ed3f923896d"}
Nov 24 09:48:00 crc kubenswrapper[4719]: I1124 09:48:00.252021 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-nv6wn" podStartSLOduration=8.87282178 podStartE2EDuration="21.252000111s" podCreationTimestamp="2025-11-24 09:47:39 +0000 UTC" firstStartedPulling="2025-11-24 09:47:46.988729651 +0000 UTC m=+3243.320002903" lastFinishedPulling="2025-11-24 09:47:59.367907982 +0000 UTC m=+3255.699181234" observedRunningTime="2025-11-24 09:48:00.242377446 +0000 UTC m=+3256.573650698" watchObservedRunningTime="2025-11-24 09:48:00.252000111 +0000 UTC m=+3256.583273363"
Nov 24 09:48:01 crc kubenswrapper[4719]: I1124 09:48:01.520860 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"
Nov 24 09:48:01 crc kubenswrapper[4719]: E1124 09:48:01.521362 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:48:03 crc kubenswrapper[4719]: I1124 09:48:03.023689 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-856dd4c45d-ncmv5" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Nov 24 09:48:03 crc kubenswrapper[4719]: I1124 09:48:03.063077 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f6b7744d-ql24k" podUID="494049ce-0355-420c-9d3b-774f7befb12a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused"
Nov 24 09:48:13 crc kubenswrapper[4719]: I1124 09:48:13.023547 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-856dd4c45d-ncmv5" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Nov 24 09:48:13 crc kubenswrapper[4719]: I1124 09:48:13.062266 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f6b7744d-ql24k" podUID="494049ce-0355-420c-9d3b-774f7befb12a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.244:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.244:8443: connect: connection refused"
Nov 24 09:48:15 crc kubenswrapper[4719]: I1124 09:48:15.521021 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"
Nov 24 09:48:15 crc kubenswrapper[4719]: E1124 09:48:15.522412 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:48:19 crc kubenswrapper[4719]: I1124 09:48:19.379057 4719 generic.go:334] "Generic (PLEG): container finished" podID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerID="be89360286ac743253c4d35f6acdf293ef676ed0e5e71c7a07f27c16c2470b29" exitCode=137
Nov 24 09:48:19 crc kubenswrapper[4719]: I1124 09:48:19.379664 4719 generic.go:334] "Generic (PLEG): container finished" podID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerID="63cb288d575e5a7bfd4e98bf1b25910f8f6d7c8aee7964281c2876e89d964c26" exitCode=137
Nov 24 09:48:19 crc kubenswrapper[4719]: I1124 09:48:19.379083 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678d5454cc-t98tb" event={"ID":"a1c2d07f-677d-422e-a815-68ab2298cc39","Type":"ContainerDied","Data":"be89360286ac743253c4d35f6acdf293ef676ed0e5e71c7a07f27c16c2470b29"}
Nov 24 09:48:19 crc kubenswrapper[4719]: I1124 09:48:19.379720 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678d5454cc-t98tb" event={"ID":"a1c2d07f-677d-422e-a815-68ab2298cc39","Type":"ContainerDied","Data":"63cb288d575e5a7bfd4e98bf1b25910f8f6d7c8aee7964281c2876e89d964c26"}
Nov 24 09:48:19 crc kubenswrapper[4719]: I1124 09:48:19.382514 4719 generic.go:334] "Generic (PLEG): container finished" podID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerID="e2e4a59f1150967a88ca6e644745b064eaf21723b4a265e06249691e1bbc90c9" exitCode=137
Nov 24 09:48:19 crc kubenswrapper[4719]: I1124 09:48:19.382542 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9f56fdb97-g5shh" event={"ID":"b4a2a599-ea1c-4571-8dbe-afd67c313647","Type":"ContainerDied","Data":"e2e4a59f1150967a88ca6e644745b064eaf21723b4a265e06249691e1bbc90c9"}
Nov 24 09:48:20 crc kubenswrapper[4719]: I1124 09:48:20.395147 4719 generic.go:334] "Generic (PLEG): container finished" podID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerID="92cee3f0cf9d3457fe2ba5a2a21f73cd2c0002d11adaed7bfb3bf1ce97eea47f" exitCode=137
Nov 24 09:48:20 crc kubenswrapper[4719]: I1124 09:48:20.395185 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9f56fdb97-g5shh" event={"ID":"b4a2a599-ea1c-4571-8dbe-afd67c313647","Type":"ContainerDied","Data":"92cee3f0cf9d3457fe2ba5a2a21f73cd2c0002d11adaed7bfb3bf1ce97eea47f"}
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.407123 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678d5454cc-t98tb" event={"ID":"a1c2d07f-677d-422e-a815-68ab2298cc39","Type":"ContainerDied","Data":"28da3b1e5c2492ef9bd76a1a1e5bbee1c2db5b4c1c0972623c9b89d6352e8b55"}
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.407363 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28da3b1e5c2492ef9bd76a1a1e5bbee1c2db5b4c1c0972623c9b89d6352e8b55"
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.412184 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9f56fdb97-g5shh" event={"ID":"b4a2a599-ea1c-4571-8dbe-afd67c313647","Type":"ContainerDied","Data":"91c61dd03e658261a3c0efab65fb0f9cd66e50d6782c6aed4d06aa0f9f82323e"}
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.412247 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91c61dd03e658261a3c0efab65fb0f9cd66e50d6782c6aed4d06aa0f9f82323e"
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.445243 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-678d5454cc-t98tb"
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.451493 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9f56fdb97-g5shh"
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.619650 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b4a2a599-ea1c-4571-8dbe-afd67c313647-horizon-secret-key\") pod \"b4a2a599-ea1c-4571-8dbe-afd67c313647\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.619784 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd7sl\" (UniqueName: \"kubernetes.io/projected/b4a2a599-ea1c-4571-8dbe-afd67c313647-kube-api-access-cd7sl\") pod \"b4a2a599-ea1c-4571-8dbe-afd67c313647\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.620403 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a2a599-ea1c-4571-8dbe-afd67c313647-logs\") pod \"b4a2a599-ea1c-4571-8dbe-afd67c313647\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.620855 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-scripts\") pod \"b4a2a599-ea1c-4571-8dbe-afd67c313647\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.620696 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a2a599-ea1c-4571-8dbe-afd67c313647-logs" (OuterVolumeSpecName: "logs") pod "b4a2a599-ea1c-4571-8dbe-afd67c313647" (UID: "b4a2a599-ea1c-4571-8dbe-afd67c313647"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.621022 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-config-data\") pod \"b4a2a599-ea1c-4571-8dbe-afd67c313647\" (UID: \"b4a2a599-ea1c-4571-8dbe-afd67c313647\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.621256 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c2d07f-677d-422e-a815-68ab2298cc39-logs\") pod \"a1c2d07f-677d-422e-a815-68ab2298cc39\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.621356 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1c2d07f-677d-422e-a815-68ab2298cc39-horizon-secret-key\") pod \"a1c2d07f-677d-422e-a815-68ab2298cc39\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.621454 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjbjf\" (UniqueName: \"kubernetes.io/projected/a1c2d07f-677d-422e-a815-68ab2298cc39-kube-api-access-vjbjf\") pod \"a1c2d07f-677d-422e-a815-68ab2298cc39\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.621552 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c2d07f-677d-422e-a815-68ab2298cc39-logs" (OuterVolumeSpecName: "logs") pod "a1c2d07f-677d-422e-a815-68ab2298cc39" (UID: "a1c2d07f-677d-422e-a815-68ab2298cc39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.621566 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-scripts\") pod \"a1c2d07f-677d-422e-a815-68ab2298cc39\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.621725 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-config-data\") pod \"a1c2d07f-677d-422e-a815-68ab2298cc39\" (UID: \"a1c2d07f-677d-422e-a815-68ab2298cc39\") "
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.622726 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a2a599-ea1c-4571-8dbe-afd67c313647-logs\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.622829 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c2d07f-677d-422e-a815-68ab2298cc39-logs\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.637328 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a2a599-ea1c-4571-8dbe-afd67c313647-kube-api-access-cd7sl" (OuterVolumeSpecName: "kube-api-access-cd7sl") pod "b4a2a599-ea1c-4571-8dbe-afd67c313647" (UID: "b4a2a599-ea1c-4571-8dbe-afd67c313647"). InnerVolumeSpecName "kube-api-access-cd7sl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.637392 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a2a599-ea1c-4571-8dbe-afd67c313647-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b4a2a599-ea1c-4571-8dbe-afd67c313647" (UID: "b4a2a599-ea1c-4571-8dbe-afd67c313647"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.638660 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c2d07f-677d-422e-a815-68ab2298cc39-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a1c2d07f-677d-422e-a815-68ab2298cc39" (UID: "a1c2d07f-677d-422e-a815-68ab2298cc39"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.657348 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c2d07f-677d-422e-a815-68ab2298cc39-kube-api-access-vjbjf" (OuterVolumeSpecName: "kube-api-access-vjbjf") pod "a1c2d07f-677d-422e-a815-68ab2298cc39" (UID: "a1c2d07f-677d-422e-a815-68ab2298cc39"). InnerVolumeSpecName "kube-api-access-vjbjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.657464 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-scripts" (OuterVolumeSpecName: "scripts") pod "b4a2a599-ea1c-4571-8dbe-afd67c313647" (UID: "b4a2a599-ea1c-4571-8dbe-afd67c313647"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.659183 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-config-data" (OuterVolumeSpecName: "config-data") pod "a1c2d07f-677d-422e-a815-68ab2298cc39" (UID: "a1c2d07f-677d-422e-a815-68ab2298cc39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.660943 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-scripts" (OuterVolumeSpecName: "scripts") pod "a1c2d07f-677d-422e-a815-68ab2298cc39" (UID: "a1c2d07f-677d-422e-a815-68ab2298cc39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.688113 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-config-data" (OuterVolumeSpecName: "config-data") pod "b4a2a599-ea1c-4571-8dbe-afd67c313647" (UID: "b4a2a599-ea1c-4571-8dbe-afd67c313647"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.747704 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.747755 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4a2a599-ea1c-4571-8dbe-afd67c313647-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.747770 4719 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a1c2d07f-677d-422e-a815-68ab2298cc39-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.747784 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjbjf\" (UniqueName: \"kubernetes.io/projected/a1c2d07f-677d-422e-a815-68ab2298cc39-kube-api-access-vjbjf\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.747796 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.747809 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a1c2d07f-677d-422e-a815-68ab2298cc39-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.747819 4719 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b4a2a599-ea1c-4571-8dbe-afd67c313647-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:21 crc kubenswrapper[4719]: I1124 09:48:21.747829 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd7sl\" (UniqueName: \"kubernetes.io/projected/b4a2a599-ea1c-4571-8dbe-afd67c313647-kube-api-access-cd7sl\") on node \"crc\" DevicePath \"\""
Nov 24 09:48:22 crc kubenswrapper[4719]: I1124 09:48:22.419548 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-678d5454cc-t98tb"
Nov 24 09:48:22 crc kubenswrapper[4719]: I1124 09:48:22.419591 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9f56fdb97-g5shh"
Nov 24 09:48:22 crc kubenswrapper[4719]: I1124 09:48:22.467336 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-678d5454cc-t98tb"]
Nov 24 09:48:22 crc kubenswrapper[4719]: I1124 09:48:22.477911 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-678d5454cc-t98tb"]
Nov 24 09:48:22 crc kubenswrapper[4719]: I1124 09:48:22.487296 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9f56fdb97-g5shh"]
Nov 24 09:48:22 crc kubenswrapper[4719]: I1124 09:48:22.493557 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9f56fdb97-g5shh"]
Nov 24 09:48:22 crc kubenswrapper[4719]: I1124 09:48:22.533412 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" path="/var/lib/kubelet/pods/a1c2d07f-677d-422e-a815-68ab2298cc39/volumes"
Nov 24 09:48:22 crc kubenswrapper[4719]: I1124 09:48:22.534788 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" path="/var/lib/kubelet/pods/b4a2a599-ea1c-4571-8dbe-afd67c313647/volumes"
Nov 24 09:48:26 crc kubenswrapper[4719]: I1124 09:48:26.586798 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5f6b7744d-ql24k"
Nov 24 09:48:26 crc kubenswrapper[4719]: I1124 09:48:26.616423 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-856dd4c45d-ncmv5"
Nov 24 09:48:28 crc kubenswrapper[4719]: I1124 09:48:28.391562 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-856dd4c45d-ncmv5"
Nov 24 09:48:28 crc kubenswrapper[4719]: I1124 09:48:28.480702 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5f6b7744d-ql24k"
Nov 24 09:48:28 crc kubenswrapper[4719]: I1124 09:48:28.557604 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-856dd4c45d-ncmv5"]
Nov 24 09:48:28 crc kubenswrapper[4719]: I1124 09:48:28.557884 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-856dd4c45d-ncmv5" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon-log" containerID="cri-o://f52e94cdbb283ee04dc8651e8114525801f47379e23293c15914887791892c4c" gracePeriod=30
Nov 24 09:48:28 crc kubenswrapper[4719]: I1124 09:48:28.558362 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-856dd4c45d-ncmv5" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon" containerID="cri-o://32b40757ee333fd3df11398d7e533f2c533c1860a5d953055c40474c63446049" gracePeriod=30
Nov 24 09:48:30 crc kubenswrapper[4719]: I1124 09:48:30.521966 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"
Nov 24 09:48:30 crc kubenswrapper[4719]: E1124 09:48:30.522636 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:48:32 crc kubenswrapper[4719]: I1124 09:48:32.512427 4719 generic.go:334] "Generic (PLEG): container finished" podID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerID="32b40757ee333fd3df11398d7e533f2c533c1860a5d953055c40474c63446049" exitCode=0
Nov 24 09:48:32 crc kubenswrapper[4719]: I1124 09:48:32.512494 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856dd4c45d-ncmv5" event={"ID":"88a1a623-2d79-4cf4-ab09-544510edc8f5","Type":"ContainerDied","Data":"32b40757ee333fd3df11398d7e533f2c533c1860a5d953055c40474c63446049"}
Nov 24 09:48:33 crc kubenswrapper[4719]: I1124 09:48:33.021374 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-856dd4c45d-ncmv5" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Nov 24 09:48:42 crc kubenswrapper[4719]: I1124 09:48:42.521363 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"
Nov 24 09:48:43 crc kubenswrapper[4719]: I1124 09:48:43.021872 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-856dd4c45d-ncmv5" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Nov 24 09:48:43 crc kubenswrapper[4719]: I1124 09:48:43.611847 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"1707be58d034cb6de2f5073861b510fe6003dfd8c59a80ccb65a0b75b54f4094"}
Nov 24 09:48:53 crc kubenswrapper[4719]: I1124 09:48:53.021675 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-856dd4c45d-ncmv5" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.243:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.243:8443: connect: connection refused"
Nov 24 09:48:53 crc kubenswrapper[4719]: I1124 09:48:53.023392 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-856dd4c45d-ncmv5"
Nov 24 09:48:58 crc kubenswrapper[4719]: I1124 09:48:58.745473 4719 generic.go:334] "Generic (PLEG): container finished" podID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerID="f52e94cdbb283ee04dc8651e8114525801f47379e23293c15914887791892c4c" exitCode=137
Nov 24 09:48:58 crc kubenswrapper[4719]: I1124 09:48:58.746812 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856dd4c45d-ncmv5" event={"ID":"88a1a623-2d79-4cf4-ab09-544510edc8f5","Type":"ContainerDied","Data":"f52e94cdbb283ee04dc8651e8114525801f47379e23293c15914887791892c4c"}
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.355656 4719 patch_prober.go:28] interesting pod/route-controller-manager-64fbcb9d69-xl7d5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.355962 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-64fbcb9d69-xl7d5" podUID="5c58f79a-ecfc-4785-ac4e-aad034718d64" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.792178 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856dd4c45d-ncmv5" event={"ID":"88a1a623-2d79-4cf4-ab09-544510edc8f5","Type":"ContainerDied","Data":"17079676cd9fe0f71ce9c33e35f942076a7b168c6642e0bbff102ee697ed61d0"}
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.792421 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17079676cd9fe0f71ce9c33e35f942076a7b168c6642e0bbff102ee697ed61d0"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.857345 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856dd4c45d-ncmv5"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.928420 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-combined-ca-bundle\") pod \"88a1a623-2d79-4cf4-ab09-544510edc8f5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") "
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.929255 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-config-data\") pod \"88a1a623-2d79-4cf4-ab09-544510edc8f5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") "
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.929353 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a1a623-2d79-4cf4-ab09-544510edc8f5-logs\") pod \"88a1a623-2d79-4cf4-ab09-544510edc8f5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") "
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.929389 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxk7d\" (UniqueName: \"kubernetes.io/projected/88a1a623-2d79-4cf4-ab09-544510edc8f5-kube-api-access-xxk7d\") pod \"88a1a623-2d79-4cf4-ab09-544510edc8f5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") "
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.929413 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-scripts\") pod \"88a1a623-2d79-4cf4-ab09-544510edc8f5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") "
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.929487 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-secret-key\") pod \"88a1a623-2d79-4cf4-ab09-544510edc8f5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") "
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.929510 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-tls-certs\") pod \"88a1a623-2d79-4cf4-ab09-544510edc8f5\" (UID: \"88a1a623-2d79-4cf4-ab09-544510edc8f5\") "
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.931572 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a1a623-2d79-4cf4-ab09-544510edc8f5-logs" (OuterVolumeSpecName: "logs") pod "88a1a623-2d79-4cf4-ab09-544510edc8f5" (UID: "88a1a623-2d79-4cf4-ab09-544510edc8f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.957364 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "88a1a623-2d79-4cf4-ab09-544510edc8f5" (UID: "88a1a623-2d79-4cf4-ab09-544510edc8f5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.966955 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a1a623-2d79-4cf4-ab09-544510edc8f5-kube-api-access-xxk7d" (OuterVolumeSpecName: "kube-api-access-xxk7d") pod "88a1a623-2d79-4cf4-ab09-544510edc8f5" (UID: "88a1a623-2d79-4cf4-ab09-544510edc8f5"). InnerVolumeSpecName "kube-api-access-xxk7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:00.992025 4719 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.249645893s: [/var/lib/containers/storage/overlay/d3017d32bfca635a84af8ef9e39a2486d7767133990ed1021c0f99678df19fe4/diff /var/log/pods/openstack_manila-db-sync-nv6wn_1e8128a6-20cf-4abd-a677-fc1d0f61fd23/manila-db-sync/0.log]; will not log again for this container unless duration exceeds 2s
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.007572 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-scripts" (OuterVolumeSpecName: "scripts") pod "88a1a623-2d79-4cf4-ab09-544510edc8f5" (UID: "88a1a623-2d79-4cf4-ab09-544510edc8f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.018568 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-config-data" (OuterVolumeSpecName: "config-data") pod "88a1a623-2d79-4cf4-ab09-544510edc8f5" (UID: "88a1a623-2d79-4cf4-ab09-544510edc8f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.042062 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.042089 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a1a623-2d79-4cf4-ab09-544510edc8f5-logs\") on node \"crc\" DevicePath \"\""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.042102 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxk7d\" (UniqueName: \"kubernetes.io/projected/88a1a623-2d79-4cf4-ab09-544510edc8f5-kube-api-access-xxk7d\") on node \"crc\" DevicePath \"\""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.042119 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88a1a623-2d79-4cf4-ab09-544510edc8f5-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.042129 4719 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.049414 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "88a1a623-2d79-4cf4-ab09-544510edc8f5" (UID: "88a1a623-2d79-4cf4-ab09-544510edc8f5"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.064649 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88a1a623-2d79-4cf4-ab09-544510edc8f5" (UID: "88a1a623-2d79-4cf4-ab09-544510edc8f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.144137 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.144164 4719 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a1a623-2d79-4cf4-ab09-544510edc8f5-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.802780 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856dd4c45d-ncmv5"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.839504 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-856dd4c45d-ncmv5"]
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:01.847092 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-856dd4c45d-ncmv5"]
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:02.533120 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" path="/var/lib/kubelet/pods/88a1a623-2d79-4cf4-ab09-544510edc8f5/volumes"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.853615 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jthjk"]
Nov 24 09:49:04 crc kubenswrapper[4719]: E1124 09:49:04.854222 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerName="horizon"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854234 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerName="horizon"
Nov 24 09:49:04 crc kubenswrapper[4719]: E1124 09:49:04.854247 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerName="horizon-log"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854254 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerName="horizon-log"
Nov 24 09:49:04 crc kubenswrapper[4719]: E1124 09:49:04.854267 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerName="horizon-log"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854275 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerName="horizon-log"
Nov 24 09:49:04 crc kubenswrapper[4719]: E1124 09:49:04.854282 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon-log"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854290 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon-log"
Nov 24 09:49:04 crc kubenswrapper[4719]: E1124 09:49:04.854317 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854324 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon"
Nov 24 09:49:04 crc kubenswrapper[4719]: E1124 09:49:04.854337 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerName="horizon"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854343 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerName="horizon"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854494 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon-log"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854503 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerName="horizon"
Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 
09:49:04.854521 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a1a623-2d79-4cf4-ab09-544510edc8f5" containerName="horizon" Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854536 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerName="horizon" Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854546 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c2d07f-677d-422e-a815-68ab2298cc39" containerName="horizon-log" Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.854552 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a2a599-ea1c-4571-8dbe-afd67c313647" containerName="horizon-log" Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.855764 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.922731 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-catalog-content\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.922869 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8kbk\" (UniqueName: \"kubernetes.io/projected/df808c0f-101c-469f-8754-94e87f612b87-kube-api-access-x8kbk\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:04 crc kubenswrapper[4719]: I1124 09:49:04.922941 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-utilities\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.025286 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-catalog-content\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.025351 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8kbk\" (UniqueName: \"kubernetes.io/projected/df808c0f-101c-469f-8754-94e87f612b87-kube-api-access-x8kbk\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.025413 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-utilities\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.025801 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-utilities\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.025844 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-catalog-content\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.083302 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8kbk\" (UniqueName: \"kubernetes.io/projected/df808c0f-101c-469f-8754-94e87f612b87-kube-api-access-x8kbk\") pod \"redhat-operators-jthjk\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.118779 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jthjk"] Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.175106 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:05 crc kubenswrapper[4719]: I1124 09:49:05.876733 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jthjk"] Nov 24 09:49:06 crc kubenswrapper[4719]: I1124 09:49:06.852505 4719 generic.go:334] "Generic (PLEG): container finished" podID="df808c0f-101c-469f-8754-94e87f612b87" containerID="d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34" exitCode=0 Nov 24 09:49:06 crc kubenswrapper[4719]: I1124 09:49:06.852586 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jthjk" event={"ID":"df808c0f-101c-469f-8754-94e87f612b87","Type":"ContainerDied","Data":"d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34"} Nov 24 09:49:06 crc kubenswrapper[4719]: I1124 09:49:06.852795 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jthjk" event={"ID":"df808c0f-101c-469f-8754-94e87f612b87","Type":"ContainerStarted","Data":"f15b51670874281f09cef792dbfed717d0085e957fe802049741f5bda9680901"} Nov 24 09:49:06 crc kubenswrapper[4719]: I1124 09:49:06.854398 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:49:09 crc kubenswrapper[4719]: I1124 09:49:09.887285 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jthjk" event={"ID":"df808c0f-101c-469f-8754-94e87f612b87","Type":"ContainerStarted","Data":"b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f"} Nov 24 09:49:23 crc kubenswrapper[4719]: I1124 09:49:23.008569 4719 generic.go:334] "Generic (PLEG): container finished" podID="df808c0f-101c-469f-8754-94e87f612b87" containerID="b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f" exitCode=0 Nov 24 09:49:23 crc kubenswrapper[4719]: I1124 09:49:23.008642 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jthjk" event={"ID":"df808c0f-101c-469f-8754-94e87f612b87","Type":"ContainerDied","Data":"b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f"} Nov 24 09:49:24 crc kubenswrapper[4719]: I1124 09:49:24.039547 4719 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jthjk" event={"ID":"df808c0f-101c-469f-8754-94e87f612b87","Type":"ContainerStarted","Data":"2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4"} Nov 24 09:49:24 crc kubenswrapper[4719]: I1124 09:49:24.072693 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jthjk" podStartSLOduration=3.345843151 podStartE2EDuration="20.072661447s" podCreationTimestamp="2025-11-24 09:49:04 +0000 UTC" firstStartedPulling="2025-11-24 09:49:06.854144162 +0000 UTC m=+3323.185417414" lastFinishedPulling="2025-11-24 09:49:23.580962458 +0000 UTC m=+3339.912235710" observedRunningTime="2025-11-24 09:49:24.062583409 +0000 UTC m=+3340.393856671" watchObservedRunningTime="2025-11-24 09:49:24.072661447 +0000 UTC m=+3340.403934689" Nov 24 09:49:25 crc kubenswrapper[4719]: I1124 09:49:25.176406 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:25 crc kubenswrapper[4719]: I1124 09:49:25.176732 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:26 crc kubenswrapper[4719]: I1124 09:49:26.230462 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jthjk" podUID="df808c0f-101c-469f-8754-94e87f612b87" containerName="registry-server" probeResult="failure" output=< Nov 24 09:49:26 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:49:26 crc kubenswrapper[4719]: > Nov 24 09:49:35 crc kubenswrapper[4719]: I1124 09:49:35.222953 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:35 crc kubenswrapper[4719]: I1124 09:49:35.290855 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:36 crc kubenswrapper[4719]: I1124 09:49:36.061352 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jthjk"] Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.152153 4719 generic.go:334] "Generic (PLEG): container finished" podID="1e8128a6-20cf-4abd-a677-fc1d0f61fd23" containerID="306cd7df2dd46e45722c7f6c6ddde4d023189804166a5f0d10db2ed3f923896d" exitCode=0 Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.153388 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jthjk" podUID="df808c0f-101c-469f-8754-94e87f612b87" containerName="registry-server" containerID="cri-o://2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4" gracePeriod=2 Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.153493 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-nv6wn" event={"ID":"1e8128a6-20cf-4abd-a677-fc1d0f61fd23","Type":"ContainerDied","Data":"306cd7df2dd46e45722c7f6c6ddde4d023189804166a5f0d10db2ed3f923896d"} Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.875110 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.912658 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-utilities\") pod \"df808c0f-101c-469f-8754-94e87f612b87\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.912757 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8kbk\" (UniqueName: \"kubernetes.io/projected/df808c0f-101c-469f-8754-94e87f612b87-kube-api-access-x8kbk\") pod \"df808c0f-101c-469f-8754-94e87f612b87\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.912888 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-catalog-content\") pod \"df808c0f-101c-469f-8754-94e87f612b87\" (UID: \"df808c0f-101c-469f-8754-94e87f612b87\") " Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.919232 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-utilities" (OuterVolumeSpecName: "utilities") pod "df808c0f-101c-469f-8754-94e87f612b87" (UID: "df808c0f-101c-469f-8754-94e87f612b87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:49:37 crc kubenswrapper[4719]: I1124 09:49:37.927243 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df808c0f-101c-469f-8754-94e87f612b87-kube-api-access-x8kbk" (OuterVolumeSpecName: "kube-api-access-x8kbk") pod "df808c0f-101c-469f-8754-94e87f612b87" (UID: "df808c0f-101c-469f-8754-94e87f612b87"). InnerVolumeSpecName "kube-api-access-x8kbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.015326 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.015369 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8kbk\" (UniqueName: \"kubernetes.io/projected/df808c0f-101c-469f-8754-94e87f612b87-kube-api-access-x8kbk\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.031393 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df808c0f-101c-469f-8754-94e87f612b87" (UID: "df808c0f-101c-469f-8754-94e87f612b87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.116953 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df808c0f-101c-469f-8754-94e87f612b87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.164988 4719 generic.go:334] "Generic (PLEG): container finished" podID="df808c0f-101c-469f-8754-94e87f612b87" containerID="2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4" exitCode=0 Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.165067 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jthjk" event={"ID":"df808c0f-101c-469f-8754-94e87f612b87","Type":"ContainerDied","Data":"2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4"} Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.165138 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jthjk" event={"ID":"df808c0f-101c-469f-8754-94e87f612b87","Type":"ContainerDied","Data":"f15b51670874281f09cef792dbfed717d0085e957fe802049741f5bda9680901"} Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.165161 4719 scope.go:117] "RemoveContainer" containerID="2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.165091 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jthjk" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.216812 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jthjk"] Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.219426 4719 scope.go:117] "RemoveContainer" containerID="b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.230167 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jthjk"] Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.256512 4719 scope.go:117] "RemoveContainer" containerID="d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.296857 4719 scope.go:117] "RemoveContainer" containerID="2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4" Nov 24 09:49:38 crc kubenswrapper[4719]: E1124 09:49:38.300891 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4\": container with ID starting with 2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4 not found: ID does not exist" containerID="2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.300939 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4"} err="failed to get container status \"2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4\": rpc error: code = NotFound desc = could not find container \"2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4\": container with ID starting with 2dfca9af65241792c9d36e992d7b1ec70396c6f5e241440dd4c69afda3e6e3d4 not found: ID does not exist" Nov 24 09:49:38 crc 
kubenswrapper[4719]: I1124 09:49:38.300965 4719 scope.go:117] "RemoveContainer" containerID="b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f" Nov 24 09:49:38 crc kubenswrapper[4719]: E1124 09:49:38.301532 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f\": container with ID starting with b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f not found: ID does not exist" containerID="b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.301570 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f"} err="failed to get container status \"b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f\": rpc error: code = NotFound desc = could not find container \"b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f\": container with ID starting with b70df69c071fd303b13a7c8c58f1639dbbcf2cc39496da971b2d20d47682b64f not found: ID does not exist" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.301598 4719 scope.go:117] "RemoveContainer" containerID="d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34" Nov 24 09:49:38 crc kubenswrapper[4719]: E1124 09:49:38.302057 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34\": container with ID starting with d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34 not found: ID does not exist" containerID="d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.302088 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34"} err="failed to get container status \"d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34\": rpc error: code = NotFound desc = could not find container \"d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34\": container with ID starting with d13f22051980bb0557c2c18b74a5edefb05e3afcba9bfea69d2a6c10adc0fb34 not found: ID does not exist" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.531958 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df808c0f-101c-469f-8754-94e87f612b87" path="/var/lib/kubelet/pods/df808c0f-101c-469f-8754-94e87f612b87/volumes" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.866747 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-nv6wn" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.932491 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-combined-ca-bundle\") pod \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.932587 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-job-config-data\") pod \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.932603 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-config-data\") pod \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.932706 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7psx\" (UniqueName: \"kubernetes.io/projected/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-kube-api-access-g7psx\") pod \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\" (UID: \"1e8128a6-20cf-4abd-a677-fc1d0f61fd23\") " Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.938792 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-kube-api-access-g7psx" (OuterVolumeSpecName: "kube-api-access-g7psx") pod "1e8128a6-20cf-4abd-a677-fc1d0f61fd23" (UID: "1e8128a6-20cf-4abd-a677-fc1d0f61fd23"). InnerVolumeSpecName "kube-api-access-g7psx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.945922 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "1e8128a6-20cf-4abd-a677-fc1d0f61fd23" (UID: "1e8128a6-20cf-4abd-a677-fc1d0f61fd23"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.952099 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-config-data" (OuterVolumeSpecName: "config-data") pod "1e8128a6-20cf-4abd-a677-fc1d0f61fd23" (UID: "1e8128a6-20cf-4abd-a677-fc1d0f61fd23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:38 crc kubenswrapper[4719]: I1124 09:49:38.989883 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e8128a6-20cf-4abd-a677-fc1d0f61fd23" (UID: "1e8128a6-20cf-4abd-a677-fc1d0f61fd23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.035470 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.035508 4719 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.035523 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.035535 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7psx\" (UniqueName: \"kubernetes.io/projected/1e8128a6-20cf-4abd-a677-fc1d0f61fd23-kube-api-access-g7psx\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.175079 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-nv6wn" event={"ID":"1e8128a6-20cf-4abd-a677-fc1d0f61fd23","Type":"ContainerDied","Data":"204cc4dc0b9f7abe16127b527d594dceda4f977aa7dd25185dd4c46f65d7493f"} Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.175121 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204cc4dc0b9f7abe16127b527d594dceda4f977aa7dd25185dd4c46f65d7493f" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.175154 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-nv6wn" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.523070 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:49:39 crc kubenswrapper[4719]: E1124 09:49:39.523763 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df808c0f-101c-469f-8754-94e87f612b87" containerName="registry-server" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.523782 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="df808c0f-101c-469f-8754-94e87f612b87" containerName="registry-server" Nov 24 09:49:39 crc kubenswrapper[4719]: E1124 09:49:39.523810 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8128a6-20cf-4abd-a677-fc1d0f61fd23" containerName="manila-db-sync" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.523818 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8128a6-20cf-4abd-a677-fc1d0f61fd23" containerName="manila-db-sync" Nov 24 09:49:39 crc kubenswrapper[4719]: E1124 09:49:39.523831 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df808c0f-101c-469f-8754-94e87f612b87" containerName="extract-utilities" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.523839 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="df808c0f-101c-469f-8754-94e87f612b87" containerName="extract-utilities" Nov 24 09:49:39 crc kubenswrapper[4719]: E1124 09:49:39.523861 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df808c0f-101c-469f-8754-94e87f612b87" containerName="extract-content" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.523868 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="df808c0f-101c-469f-8754-94e87f612b87" 
containerName="extract-content" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.524097 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8128a6-20cf-4abd-a677-fc1d0f61fd23" containerName="manila-db-sync" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.524123 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="df808c0f-101c-469f-8754-94e87f612b87" containerName="registry-server" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.525225 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.534724 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.536450 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.544891 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.545060 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-7q75z" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.545170 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.545310 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.551484 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.561259 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.567025 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.653916 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.653972 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w2wc\" (UniqueName: \"kubernetes.io/projected/63322e98-36aa-491e-9ba6-ec47b452f3aa-kube-api-access-4w2wc\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654000 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654065 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4jvr\" (UniqueName: 
\"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-kube-api-access-z4jvr\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654087 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654109 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-scripts\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654130 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-ceph\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654149 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654174 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654197 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654220 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654245 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-scripts\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654269 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/63322e98-36aa-491e-9ba6-ec47b452f3aa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.654295 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755543 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4jvr\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-kube-api-access-z4jvr\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755590 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755619 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-scripts\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755642 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-ceph\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755665 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755691 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755714 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755742 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 
09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755767 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-scripts\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755792 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63322e98-36aa-491e-9ba6-ec47b452f3aa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755820 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755849 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755875 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w2wc\" (UniqueName: \"kubernetes.io/projected/63322e98-36aa-491e-9ba6-ec47b452f3aa-kube-api-access-4w2wc\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.755898 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.756769 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63322e98-36aa-491e-9ba6-ec47b452f3aa-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.758061 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.760294 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.763914 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data\") pod 
\"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.770374 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.770878 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-scripts\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.773576 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-scripts\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.775083 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.775494 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-ceph\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.776643 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.780831 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.794073 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4jvr\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-kube-api-access-z4jvr\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.794774 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.806691 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-x2fxq"] Nov 24 09:49:39 crc 
kubenswrapper[4719]: I1124 09:49:39.811099 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w2wc\" (UniqueName: \"kubernetes.io/projected/63322e98-36aa-491e-9ba6-ec47b452f3aa-kube-api-access-4w2wc\") pod \"manila-scheduler-0\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.816373 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.820718 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-x2fxq"] Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.857186 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.873887 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.873985 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-config\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.874029 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.874092 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zbpm\" (UniqueName: \"kubernetes.io/projected/643db723-7fbb-4c9e-a815-fcfbc4eab02c-kube-api-access-6zbpm\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.874137 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.874217 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.881543 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.944115 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.945690 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.950514 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.978186 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.978270 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-config\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.978311 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.978351 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zbpm\" (UniqueName: \"kubernetes.io/projected/643db723-7fbb-4c9e-a815-fcfbc4eab02c-kube-api-access-6zbpm\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.978409 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.978469 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.979379 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.979844 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-dns-svc\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: 
\"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.980217 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.982282 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-config\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.983110 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:39 crc kubenswrapper[4719]: I1124 09:49:39.983270 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/643db723-7fbb-4c9e-a815-fcfbc4eab02c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.010692 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zbpm\" (UniqueName: \"kubernetes.io/projected/643db723-7fbb-4c9e-a815-fcfbc4eab02c-kube-api-access-6zbpm\") pod \"dnsmasq-dns-5c846ff5b9-x2fxq\" (UID: \"643db723-7fbb-4c9e-a815-fcfbc4eab02c\") " pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.080888 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data-custom\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.080996 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-scripts\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.081035 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bdef5e-f31d-4271-98df-9f6e02166ee4-logs\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.081111 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.081144 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5nvs\" (UniqueName: \"kubernetes.io/projected/30bdef5e-f31d-4271-98df-9f6e02166ee4-kube-api-access-k5nvs\") pod \"manila-api-0\" (UID: 
\"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.081166 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bdef5e-f31d-4271-98df-9f6e02166ee4-etc-machine-id\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.081183 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.182382 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5nvs\" (UniqueName: \"kubernetes.io/projected/30bdef5e-f31d-4271-98df-9f6e02166ee4-kube-api-access-k5nvs\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.182725 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bdef5e-f31d-4271-98df-9f6e02166ee4-etc-machine-id\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.182779 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.182828 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bdef5e-f31d-4271-98df-9f6e02166ee4-etc-machine-id\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.182897 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data-custom\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.182979 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-scripts\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.183072 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bdef5e-f31d-4271-98df-9f6e02166ee4-logs\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.183156 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-combined-ca-bundle\") pod 
\"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.187347 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bdef5e-f31d-4271-98df-9f6e02166ee4-logs\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.213070 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-scripts\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.215087 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.215231 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.216394 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data-custom\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.228175 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.236672 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5nvs\" (UniqueName: \"kubernetes.io/projected/30bdef5e-f31d-4271-98df-9f6e02166ee4-kube-api-access-k5nvs\") pod \"manila-api-0\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.387648 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.837782 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 09:49:40 crc kubenswrapper[4719]: I1124 09:49:40.904969 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:49:41 crc kubenswrapper[4719]: I1124 09:49:41.029972 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c846ff5b9-x2fxq"] Nov 24 09:49:41 crc kubenswrapper[4719]: W1124 09:49:41.087723 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod643db723_7fbb_4c9e_a815_fcfbc4eab02c.slice/crio-6c996c46bf91256c1daeb5e6ea4b60ad9117c42ba6e70867b86b2a4f98f3be90 WatchSource:0}: Error finding container 6c996c46bf91256c1daeb5e6ea4b60ad9117c42ba6e70867b86b2a4f98f3be90: Status 404 returned error can't find the container with id 6c996c46bf91256c1daeb5e6ea4b60ad9117c42ba6e70867b86b2a4f98f3be90 Nov 24 09:49:41 crc kubenswrapper[4719]: I1124 09:49:41.203605 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" event={"ID":"643db723-7fbb-4c9e-a815-fcfbc4eab02c","Type":"ContainerStarted","Data":"6c996c46bf91256c1daeb5e6ea4b60ad9117c42ba6e70867b86b2a4f98f3be90"} Nov 24 09:49:41 crc kubenswrapper[4719]: I1124 09:49:41.210702 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"63322e98-36aa-491e-9ba6-ec47b452f3aa","Type":"ContainerStarted","Data":"becaf11134cfc67402d94f29e5aabf37667e7837e9e3eab6dd487965fb462ccc"} Nov 24 09:49:41 crc kubenswrapper[4719]: I1124 09:49:41.216302 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"55001879-8601-4a7a-b3df-4b847f9b72e4","Type":"ContainerStarted","Data":"3b3b019b504e268caa007bb06adf76c198ae36c5d120d9bb6898c9e53d248ce4"} Nov 24 09:49:41 crc kubenswrapper[4719]: I1124 09:49:41.224292 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:42 crc kubenswrapper[4719]: I1124 09:49:42.258648 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"30bdef5e-f31d-4271-98df-9f6e02166ee4","Type":"ContainerStarted","Data":"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38"} Nov 24 09:49:42 crc kubenswrapper[4719]: I1124 09:49:42.259142 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"30bdef5e-f31d-4271-98df-9f6e02166ee4","Type":"ContainerStarted","Data":"56c081f03b192ca1c232f14154000ba4867e14feedb024788655b63a34d9ebaa"} Nov 24 09:49:42 crc kubenswrapper[4719]: I1124 09:49:42.263286 4719 generic.go:334] "Generic (PLEG): container finished" podID="643db723-7fbb-4c9e-a815-fcfbc4eab02c" containerID="de80c3ab7396947fc568a109874221d530bbe1866aed51d88b209464441ec970" exitCode=0 Nov 24 09:49:42 crc kubenswrapper[4719]: I1124 09:49:42.263327 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" event={"ID":"643db723-7fbb-4c9e-a815-fcfbc4eab02c","Type":"ContainerDied","Data":"de80c3ab7396947fc568a109874221d530bbe1866aed51d88b209464441ec970"} Nov 24 09:49:42 crc kubenswrapper[4719]: I1124 09:49:42.519956 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:43 crc kubenswrapper[4719]: I1124 09:49:43.283651 4719 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"63322e98-36aa-491e-9ba6-ec47b452f3aa","Type":"ContainerStarted","Data":"73ed274a892e64ab5b517b6d026a6a25956bab591773753f15bc96828b53db60"} Nov 24 09:49:43 crc kubenswrapper[4719]: I1124 09:49:43.286354 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerName="manila-api-log" containerID="cri-o://20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38" gracePeriod=30 Nov 24 09:49:43 crc kubenswrapper[4719]: I1124 09:49:43.286601 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"30bdef5e-f31d-4271-98df-9f6e02166ee4","Type":"ContainerStarted","Data":"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9"} Nov 24 09:49:43 crc kubenswrapper[4719]: I1124 09:49:43.286651 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 24 09:49:43 crc kubenswrapper[4719]: I1124 09:49:43.287233 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerName="manila-api" containerID="cri-o://6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9" gracePeriod=30 Nov 24 09:49:43 crc kubenswrapper[4719]: I1124 09:49:43.304870 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" event={"ID":"643db723-7fbb-4c9e-a815-fcfbc4eab02c","Type":"ContainerStarted","Data":"88724b603d01aebdd2d8c10aa3d76b30cf86f379614fd1717ff676a156b2b638"} Nov 24 09:49:43 crc kubenswrapper[4719]: I1124 09:49:43.306125 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:43 crc kubenswrapper[4719]: I1124 09:49:43.320293 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.320275418 podStartE2EDuration="4.320275418s" podCreationTimestamp="2025-11-24 09:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:49:43.31370218 +0000 UTC m=+3359.644975442" watchObservedRunningTime="2025-11-24 09:49:43.320275418 +0000 UTC m=+3359.651548670" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.037329 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.057762 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" podStartSLOduration=5.057738508 podStartE2EDuration="5.057738508s" podCreationTimestamp="2025-11-24 09:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:49:43.346469866 +0000 UTC m=+3359.677743128" watchObservedRunningTime="2025-11-24 09:49:44.057738508 +0000 UTC m=+3360.389011760" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.104324 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data-custom\") pod \"30bdef5e-f31d-4271-98df-9f6e02166ee4\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.104453 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data\") pod \"30bdef5e-f31d-4271-98df-9f6e02166ee4\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.104494 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bdef5e-f31d-4271-98df-9f6e02166ee4-logs\") pod \"30bdef5e-f31d-4271-98df-9f6e02166ee4\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.104567 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bdef5e-f31d-4271-98df-9f6e02166ee4-etc-machine-id\") pod \"30bdef5e-f31d-4271-98df-9f6e02166ee4\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.104583 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-combined-ca-bundle\") pod \"30bdef5e-f31d-4271-98df-9f6e02166ee4\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.104602 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5nvs\" (UniqueName: \"kubernetes.io/projected/30bdef5e-f31d-4271-98df-9f6e02166ee4-kube-api-access-k5nvs\") pod \"30bdef5e-f31d-4271-98df-9f6e02166ee4\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.104653 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-scripts\") pod \"30bdef5e-f31d-4271-98df-9f6e02166ee4\" (UID: \"30bdef5e-f31d-4271-98df-9f6e02166ee4\") " Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.106008 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bdef5e-f31d-4271-98df-9f6e02166ee4-logs" (OuterVolumeSpecName: "logs") pod "30bdef5e-f31d-4271-98df-9f6e02166ee4" (UID: "30bdef5e-f31d-4271-98df-9f6e02166ee4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.106207 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bdef5e-f31d-4271-98df-9f6e02166ee4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "30bdef5e-f31d-4271-98df-9f6e02166ee4" (UID: "30bdef5e-f31d-4271-98df-9f6e02166ee4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.113557 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-scripts" (OuterVolumeSpecName: "scripts") pod "30bdef5e-f31d-4271-98df-9f6e02166ee4" (UID: "30bdef5e-f31d-4271-98df-9f6e02166ee4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.122085 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bdef5e-f31d-4271-98df-9f6e02166ee4-kube-api-access-k5nvs" (OuterVolumeSpecName: "kube-api-access-k5nvs") pod "30bdef5e-f31d-4271-98df-9f6e02166ee4" (UID: "30bdef5e-f31d-4271-98df-9f6e02166ee4"). InnerVolumeSpecName "kube-api-access-k5nvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.137169 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "30bdef5e-f31d-4271-98df-9f6e02166ee4" (UID: "30bdef5e-f31d-4271-98df-9f6e02166ee4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.206719 4719 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30bdef5e-f31d-4271-98df-9f6e02166ee4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.206756 4719 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/30bdef5e-f31d-4271-98df-9f6e02166ee4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.206770 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5nvs\" (UniqueName: \"kubernetes.io/projected/30bdef5e-f31d-4271-98df-9f6e02166ee4-kube-api-access-k5nvs\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.206782 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.206793 4719 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.210199 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30bdef5e-f31d-4271-98df-9f6e02166ee4" (UID: "30bdef5e-f31d-4271-98df-9f6e02166ee4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.255647 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data" (OuterVolumeSpecName: "config-data") pod "30bdef5e-f31d-4271-98df-9f6e02166ee4" (UID: "30bdef5e-f31d-4271-98df-9f6e02166ee4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.308000 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.308027 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bdef5e-f31d-4271-98df-9f6e02166ee4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.315851 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"63322e98-36aa-491e-9ba6-ec47b452f3aa","Type":"ContainerStarted","Data":"3f0e5189567e7fb0c7567813b3f096b5d299316c30cd250f9a7c0ed70440a7db"} Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.324875 4719 generic.go:334] "Generic (PLEG): container finished" podID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerID="6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9" exitCode=143 Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.325074 4719 generic.go:334] "Generic (PLEG): container finished" podID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerID="20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38" exitCode=143 Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.324928 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.324948 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"30bdef5e-f31d-4271-98df-9f6e02166ee4","Type":"ContainerDied","Data":"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9"} Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.325465 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"30bdef5e-f31d-4271-98df-9f6e02166ee4","Type":"ContainerDied","Data":"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38"} Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.325501 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"30bdef5e-f31d-4271-98df-9f6e02166ee4","Type":"ContainerDied","Data":"56c081f03b192ca1c232f14154000ba4867e14feedb024788655b63a34d9ebaa"} Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.325517 4719 scope.go:117] "RemoveContainer" containerID="6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.338616 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.131743341 podStartE2EDuration="5.338598973s" podCreationTimestamp="2025-11-24 09:49:39 +0000 UTC" firstStartedPulling="2025-11-24 09:49:40.954208166 +0000 UTC m=+3357.285481418" lastFinishedPulling="2025-11-24 09:49:42.161063798 +0000 UTC m=+3358.492337050" observedRunningTime="2025-11-24 09:49:44.330226974 +0000 UTC m=+3360.661500226" watchObservedRunningTime="2025-11-24 09:49:44.338598973 +0000 UTC m=+3360.669872225" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.359362 4719 scope.go:117] "RemoveContainer" containerID="20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.378420 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.398525 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.408903 4719 scope.go:117] "RemoveContainer" containerID="6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9" Nov 24 09:49:44 crc kubenswrapper[4719]: E1124 09:49:44.411891 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9\": container with ID starting with 6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9 not found: ID does not exist" containerID="6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.411930 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9"} err="failed to get container status \"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9\": rpc error: code = NotFound desc = could not find container \"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9\": container with ID starting with 6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9 not found: ID does not exist" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.411953 4719 scope.go:117] "RemoveContainer" 
containerID="20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38" Nov 24 09:49:44 crc kubenswrapper[4719]: E1124 09:49:44.413499 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38\": container with ID starting with 20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38 not found: ID does not exist" containerID="20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.413532 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38"} err="failed to get container status \"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38\": rpc error: code = NotFound desc = could not find container \"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38\": container with ID starting with 20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38 not found: ID does not exist" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.413551 4719 scope.go:117] "RemoveContainer" containerID="6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.417577 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9"} err="failed to get container status \"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9\": rpc error: code = NotFound desc = could not find container \"6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9\": container with ID starting with 6155e973c33710def535096fe8a7bff242878a95d2c0ff76c2727a1afff00bc9 not found: ID does not exist" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.417612 4719 scope.go:117] "RemoveContainer" containerID="20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.418060 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38"} err="failed to get container status \"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38\": rpc error: code = NotFound desc = could not find container \"20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38\": container with ID starting with 20099227f2b917d1581a7337f6a0e75ede55f34e4a7e8791f75d186be9b78e38 not found: ID does not exist" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.421514 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:44 crc kubenswrapper[4719]: E1124 09:49:44.421886 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerName="manila-api" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.421903 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerName="manila-api" Nov 24 09:49:44 crc kubenswrapper[4719]: E1124 09:49:44.421928 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerName="manila-api-log" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.421934 4719 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerName="manila-api-log" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.422127 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerName="manila-api" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.422151 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" containerName="manila-api-log" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.423132 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.429492 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.429610 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.429704 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.436554 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.511573 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7db9d547-856d-42d1-a2b5-bdc02f69d938-etc-machine-id\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.511668 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-internal-tls-certs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.511717 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-config-data-custom\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.511747 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-public-tls-certs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.511763 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-config-data\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.511910 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db9d547-856d-42d1-a2b5-bdc02f69d938-logs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc 
kubenswrapper[4719]: I1124 09:49:44.512010 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-scripts\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.512122 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2xsr\" (UniqueName: \"kubernetes.io/projected/7db9d547-856d-42d1-a2b5-bdc02f69d938-kube-api-access-l2xsr\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.512384 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.544220 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bdef5e-f31d-4271-98df-9f6e02166ee4" path="/var/lib/kubelet/pods/30bdef5e-f31d-4271-98df-9f6e02166ee4/volumes" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.616124 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.616246 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7db9d547-856d-42d1-a2b5-bdc02f69d938-etc-machine-id\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.616276 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-internal-tls-certs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.616303 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-config-data-custom\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.616329 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-public-tls-certs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.616344 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-config-data\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: 
I1124 09:49:44.616386 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db9d547-856d-42d1-a2b5-bdc02f69d938-logs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.616406 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-scripts\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.616429 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2xsr\" (UniqueName: \"kubernetes.io/projected/7db9d547-856d-42d1-a2b5-bdc02f69d938-kube-api-access-l2xsr\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.619794 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db9d547-856d-42d1-a2b5-bdc02f69d938-logs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.619956 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7db9d547-856d-42d1-a2b5-bdc02f69d938-etc-machine-id\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.623769 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.624242 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.626534 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.626807 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.627078 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-scripts\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.631937 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-config-data\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.638608 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-public-tls-certs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " 
pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.638697 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-config-data-custom\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.639961 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db9d547-856d-42d1-a2b5-bdc02f69d938-internal-tls-certs\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.640499 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2xsr\" (UniqueName: \"kubernetes.io/projected/7db9d547-856d-42d1-a2b5-bdc02f69d938-kube-api-access-l2xsr\") pod \"manila-api-0\" (UID: \"7db9d547-856d-42d1-a2b5-bdc02f69d938\") " pod="openstack/manila-api-0" Nov 24 09:49:44 crc kubenswrapper[4719]: I1124 09:49:44.747560 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 24 09:49:45 crc kubenswrapper[4719]: I1124 09:49:45.500678 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 24 09:49:46 crc kubenswrapper[4719]: I1124 09:49:46.365806 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7db9d547-856d-42d1-a2b5-bdc02f69d938","Type":"ContainerStarted","Data":"5b008879c7cbd6b7edd305b6bdf7a6ff749f6c8683e260af1dbd26bc73d1478c"} Nov 24 09:49:46 crc kubenswrapper[4719]: I1124 09:49:46.366052 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7db9d547-856d-42d1-a2b5-bdc02f69d938","Type":"ContainerStarted","Data":"08d838f51fb7227a1f1986a816d6704f81b7d4a8edd37d10f0090503a74110e6"} Nov 24 09:49:47 crc kubenswrapper[4719]: I1124 09:49:47.400650 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7db9d547-856d-42d1-a2b5-bdc02f69d938","Type":"ContainerStarted","Data":"78e7d7cbe21a656aa24ad843a04b8604470d412399cd340b8142bc2607842be5"} Nov 24 09:49:47 crc kubenswrapper[4719]: I1124 09:49:47.401228 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 24 09:49:49 crc kubenswrapper[4719]: I1124 09:49:49.859318 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 24 09:49:50 crc kubenswrapper[4719]: I1124 09:49:50.230224 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c846ff5b9-x2fxq" Nov 24 09:49:50 crc kubenswrapper[4719]: I1124 09:49:50.254223 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=6.254200621 podStartE2EDuration="6.254200621s" podCreationTimestamp="2025-11-24 09:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:49:47.437187344 +0000 UTC m=+3363.768460616" watchObservedRunningTime="2025-11-24 09:49:50.254200621 +0000 UTC m=+3366.585473883" Nov 24 09:49:50 crc kubenswrapper[4719]: I1124 09:49:50.318761 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-w52mp"] Nov 24 
09:49:50 crc kubenswrapper[4719]: I1124 09:49:50.429280 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" podUID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" containerName="dnsmasq-dns" containerID="cri-o://002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0" gracePeriod=10 Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.370249 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.452674 4719 generic.go:334] "Generic (PLEG): container finished" podID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" containerID="002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0" exitCode=0 Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.452714 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" event={"ID":"b6c26c2d-008f-4cc0-99db-80a8e21c3537","Type":"ContainerDied","Data":"002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0"} Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.452739 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" event={"ID":"b6c26c2d-008f-4cc0-99db-80a8e21c3537","Type":"ContainerDied","Data":"a992d898d40d1db99888a165dee7a4475738009285c04be069422dcd2d9971c3"} Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.452755 4719 scope.go:117] "RemoveContainer" containerID="002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.452878 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-667ff9c869-w52mp" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.510435 4719 scope.go:117] "RemoveContainer" containerID="f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.536849 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-dns-svc\") pod \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.537221 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-sb\") pod \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.537291 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfvxh\" (UniqueName: \"kubernetes.io/projected/b6c26c2d-008f-4cc0-99db-80a8e21c3537-kube-api-access-sfvxh\") pod \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.537377 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-nb\") pod \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.537404 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-config\") pod \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.537518 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-openstack-edpm-ipam\") pod \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\" (UID: \"b6c26c2d-008f-4cc0-99db-80a8e21c3537\") " Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.548546 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c26c2d-008f-4cc0-99db-80a8e21c3537-kube-api-access-sfvxh" (OuterVolumeSpecName: "kube-api-access-sfvxh") pod "b6c26c2d-008f-4cc0-99db-80a8e21c3537" (UID: "b6c26c2d-008f-4cc0-99db-80a8e21c3537"). InnerVolumeSpecName "kube-api-access-sfvxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.553593 4719 scope.go:117] "RemoveContainer" containerID="002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0" Nov 24 09:49:51 crc kubenswrapper[4719]: E1124 09:49:51.554002 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0\": container with ID starting with 002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0 not found: ID does not exist" containerID="002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.554075 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0"} err="failed to get container status \"002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0\": rpc error: code = NotFound desc = could not find container \"002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0\": container with ID starting with 002d69b3127f9d9e7e69a921e37d6204154e51a4f0968b4f125e99db349e63b0 not found: ID does not exist" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.554096 4719 scope.go:117] "RemoveContainer" containerID="f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff" Nov 24 09:49:51 crc kubenswrapper[4719]: E1124 09:49:51.554406 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff\": container with ID starting with f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff not found: ID does not exist" containerID="f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.554427 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff"} err="failed to get container status \"f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff\": rpc error: code = NotFound desc = could not find container \"f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff\": container with ID starting with f4b7e08190f6fa8eb96de5922658ef441d2ae5f43438926bb2ce222e1923dcff not found: ID does not exist" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.640301 4719 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfvxh\" (UniqueName: \"kubernetes.io/projected/b6c26c2d-008f-4cc0-99db-80a8e21c3537-kube-api-access-sfvxh\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.649720 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b6c26c2d-008f-4cc0-99db-80a8e21c3537" (UID: "b6c26c2d-008f-4cc0-99db-80a8e21c3537"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.664582 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b6c26c2d-008f-4cc0-99db-80a8e21c3537" (UID: "b6c26c2d-008f-4cc0-99db-80a8e21c3537"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.672559 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-config" (OuterVolumeSpecName: "config") pod "b6c26c2d-008f-4cc0-99db-80a8e21c3537" (UID: "b6c26c2d-008f-4cc0-99db-80a8e21c3537"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.680457 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6c26c2d-008f-4cc0-99db-80a8e21c3537" (UID: "b6c26c2d-008f-4cc0-99db-80a8e21c3537"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.685964 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b6c26c2d-008f-4cc0-99db-80a8e21c3537" (UID: "b6c26c2d-008f-4cc0-99db-80a8e21c3537"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.741756 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.741780 4719 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-config\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.741790 4719 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.741801 4719 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.741809 4719 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6c26c2d-008f-4cc0-99db-80a8e21c3537-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.812957 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-w52mp"] Nov 24 09:49:51 crc kubenswrapper[4719]: I1124 09:49:51.834334 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-667ff9c869-w52mp"] Nov 24 09:49:52 crc kubenswrapper[4719]: I1124 09:49:52.468389 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"55001879-8601-4a7a-b3df-4b847f9b72e4","Type":"ContainerStarted","Data":"78652d204d4d31ea5c6e5cd3b92245a860ce021e6637d4d763869619e2316660"} Nov 24 09:49:52 crc kubenswrapper[4719]: I1124 09:49:52.468631 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"55001879-8601-4a7a-b3df-4b847f9b72e4","Type":"ContainerStarted","Data":"ce0f3c7fed23e8413f74630585f7ab48b5701e658a73556df693ccb854e24583"} Nov 24 09:49:52 crc kubenswrapper[4719]: I1124 09:49:52.533969 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" path="/var/lib/kubelet/pods/b6c26c2d-008f-4cc0-99db-80a8e21c3537/volumes" Nov 24 09:49:54 crc kubenswrapper[4719]: I1124 09:49:54.001086 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.870378775 podStartE2EDuration="15.001054783s" podCreationTimestamp="2025-11-24 09:49:39 +0000 UTC" firstStartedPulling="2025-11-24 09:49:40.83115453 +0000 UTC m=+3357.162427792" lastFinishedPulling="2025-11-24 09:49:50.961830558 +0000 UTC m=+3367.293103800" observedRunningTime="2025-11-24 09:49:52.492653286 +0000 UTC m=+3368.823926538" watchObservedRunningTime="2025-11-24 09:49:54.001054783 +0000 UTC m=+3370.332328045" Nov 24 09:49:54 crc kubenswrapper[4719]: I1124 09:49:54.008406 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:49:54 crc kubenswrapper[4719]: I1124 09:49:54.008729 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" 
containerName="ceilometer-central-agent" containerID="cri-o://68b306888f5524ae2c072d6156995c841184149b57d112d2f15a78e6bae82ac3" gracePeriod=30 Nov 24 09:49:54 crc kubenswrapper[4719]: I1124 09:49:54.008776 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="proxy-httpd" containerID="cri-o://29d9969e3cd5ff0dc73abea3e92356e9f6ebb0cb6e8d5068348f0030f502f1d5" gracePeriod=30 Nov 24 09:49:54 crc kubenswrapper[4719]: I1124 09:49:54.008801 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="sg-core" containerID="cri-o://400edd73c098a1a6dafb0d4ca888f593bab289641aac9c609a6b5562d406bcfa" gracePeriod=30 Nov 24 09:49:54 crc kubenswrapper[4719]: I1124 09:49:54.008808 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="ceilometer-notification-agent" containerID="cri-o://4c9552b2f51e8194754c00e5b74df4f294fb35dd3caf9e2d9f19c6c7c5dc7935" gracePeriod=30 Nov 24 09:49:54 crc kubenswrapper[4719]: I1124 09:49:54.487634 4719 generic.go:334] "Generic (PLEG): container finished" podID="62091726-7f9c-439d-a39a-54ce59e0130b" containerID="400edd73c098a1a6dafb0d4ca888f593bab289641aac9c609a6b5562d406bcfa" exitCode=2 Nov 24 09:49:54 crc kubenswrapper[4719]: I1124 09:49:54.487676 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerDied","Data":"400edd73c098a1a6dafb0d4ca888f593bab289641aac9c609a6b5562d406bcfa"} Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.498831 4719 generic.go:334] "Generic (PLEG): container finished" podID="62091726-7f9c-439d-a39a-54ce59e0130b" containerID="29d9969e3cd5ff0dc73abea3e92356e9f6ebb0cb6e8d5068348f0030f502f1d5" exitCode=0 Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.499284 4719 generic.go:334] "Generic (PLEG): container finished" podID="62091726-7f9c-439d-a39a-54ce59e0130b" containerID="4c9552b2f51e8194754c00e5b74df4f294fb35dd3caf9e2d9f19c6c7c5dc7935" exitCode=0 Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.499297 4719 generic.go:334] "Generic (PLEG): container finished" podID="62091726-7f9c-439d-a39a-54ce59e0130b" containerID="68b306888f5524ae2c072d6156995c841184149b57d112d2f15a78e6bae82ac3" exitCode=0 Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.498908 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerDied","Data":"29d9969e3cd5ff0dc73abea3e92356e9f6ebb0cb6e8d5068348f0030f502f1d5"} Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.499336 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerDied","Data":"4c9552b2f51e8194754c00e5b74df4f294fb35dd3caf9e2d9f19c6c7c5dc7935"} Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.499351 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerDied","Data":"68b306888f5524ae2c072d6156995c841184149b57d112d2f15a78e6bae82ac3"} Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.499362 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"62091726-7f9c-439d-a39a-54ce59e0130b","Type":"ContainerDied","Data":"1ec7485d58cefbaf3cf8d282bfd5f7fe935eb8182179dbfd369ecd3e18fe4ed3"} Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.499371 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ec7485d58cefbaf3cf8d282bfd5f7fe935eb8182179dbfd369ecd3e18fe4ed3" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.521880 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.619398 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-sg-core-conf-yaml\") pod \"62091726-7f9c-439d-a39a-54ce59e0130b\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.619505 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-run-httpd\") pod \"62091726-7f9c-439d-a39a-54ce59e0130b\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.619622 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-combined-ca-bundle\") pod \"62091726-7f9c-439d-a39a-54ce59e0130b\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.619653 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-ceilometer-tls-certs\") pod \"62091726-7f9c-439d-a39a-54ce59e0130b\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.619741 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-config-data\") pod \"62091726-7f9c-439d-a39a-54ce59e0130b\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.619830 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-log-httpd\") pod \"62091726-7f9c-439d-a39a-54ce59e0130b\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.619871 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qqc\" (UniqueName: \"kubernetes.io/projected/62091726-7f9c-439d-a39a-54ce59e0130b-kube-api-access-28qqc\") pod \"62091726-7f9c-439d-a39a-54ce59e0130b\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.619968 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-scripts\") pod \"62091726-7f9c-439d-a39a-54ce59e0130b\" (UID: \"62091726-7f9c-439d-a39a-54ce59e0130b\") " Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.623727 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "62091726-7f9c-439d-a39a-54ce59e0130b" (UID: "62091726-7f9c-439d-a39a-54ce59e0130b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.625632 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62091726-7f9c-439d-a39a-54ce59e0130b" (UID: "62091726-7f9c-439d-a39a-54ce59e0130b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.631514 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-scripts" (OuterVolumeSpecName: "scripts") pod "62091726-7f9c-439d-a39a-54ce59e0130b" (UID: "62091726-7f9c-439d-a39a-54ce59e0130b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.653396 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62091726-7f9c-439d-a39a-54ce59e0130b-kube-api-access-28qqc" (OuterVolumeSpecName: "kube-api-access-28qqc") pod "62091726-7f9c-439d-a39a-54ce59e0130b" (UID: "62091726-7f9c-439d-a39a-54ce59e0130b"). InnerVolumeSpecName "kube-api-access-28qqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.684262 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62091726-7f9c-439d-a39a-54ce59e0130b" (UID: "62091726-7f9c-439d-a39a-54ce59e0130b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.700228 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "62091726-7f9c-439d-a39a-54ce59e0130b" (UID: "62091726-7f9c-439d-a39a-54ce59e0130b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.723329 4719 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.723354 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qqc\" (UniqueName: \"kubernetes.io/projected/62091726-7f9c-439d-a39a-54ce59e0130b-kube-api-access-28qqc\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.723365 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.723372 4719 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.723380 4719 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62091726-7f9c-439d-a39a-54ce59e0130b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.723388 4719 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.782802 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62091726-7f9c-439d-a39a-54ce59e0130b" (UID: "62091726-7f9c-439d-a39a-54ce59e0130b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.808807 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-config-data" (OuterVolumeSpecName: "config-data") pod "62091726-7f9c-439d-a39a-54ce59e0130b" (UID: "62091726-7f9c-439d-a39a-54ce59e0130b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.826179 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:55 crc kubenswrapper[4719]: I1124 09:49:55.826211 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62091726-7f9c-439d-a39a-54ce59e0130b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.507302 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.547898 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.557439 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.571162 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:49:56 crc kubenswrapper[4719]: E1124 09:49:56.571681 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="proxy-httpd" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.571706 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="proxy-httpd" Nov 24 09:49:56 crc kubenswrapper[4719]: E1124 09:49:56.571725 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="ceilometer-central-agent" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.571733 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="ceilometer-central-agent" Nov 24 09:49:56 crc kubenswrapper[4719]: E1124 09:49:56.571742 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="sg-core" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.571749 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="sg-core" Nov 24 09:49:56 crc kubenswrapper[4719]: E1124 09:49:56.571759 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" containerName="dnsmasq-dns" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.571765 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" containerName="dnsmasq-dns" Nov 24 09:49:56 crc kubenswrapper[4719]: E1124 09:49:56.571782 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" containerName="init" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.571789 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" containerName="init" Nov 24 09:49:56 crc kubenswrapper[4719]: E1124 09:49:56.571802 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="ceilometer-notification-agent" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.571808 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="ceilometer-notification-agent" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.572024 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c26c2d-008f-4cc0-99db-80a8e21c3537" containerName="dnsmasq-dns" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.572054 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="ceilometer-notification-agent" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.572070 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="sg-core" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.572084 4719 
memory_manager.go:354] "RemoveStaleState removing state" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="ceilometer-central-agent" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.572099 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" containerName="proxy-httpd" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.574256 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.576510 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.578336 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.578679 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.583975 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.640962 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.641165 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-config-data\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.641192 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.641211 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4htz\" (UniqueName: \"kubernetes.io/projected/dd478071-4e9d-402f-afa7-fbd28f489095-kube-api-access-r4htz\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.641263 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.641286 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-scripts\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.641302 4719 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd478071-4e9d-402f-afa7-fbd28f489095-run-httpd\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.641328 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd478071-4e9d-402f-afa7-fbd28f489095-log-httpd\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.743919 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-config-data\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.744194 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.744285 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4htz\" (UniqueName: \"kubernetes.io/projected/dd478071-4e9d-402f-afa7-fbd28f489095-kube-api-access-r4htz\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.744416 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.744508 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-scripts\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.744604 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd478071-4e9d-402f-afa7-fbd28f489095-run-httpd\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.744724 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd478071-4e9d-402f-afa7-fbd28f489095-log-httpd\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.744931 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.746428 4719 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd478071-4e9d-402f-afa7-fbd28f489095-run-httpd\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.746703 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd478071-4e9d-402f-afa7-fbd28f489095-log-httpd\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.750570 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.750995 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.752024 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-scripts\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.752397 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-config-data\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.759979 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd478071-4e9d-402f-afa7-fbd28f489095-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.766052 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4htz\" (UniqueName: \"kubernetes.io/projected/dd478071-4e9d-402f-afa7-fbd28f489095-kube-api-access-r4htz\") pod \"ceilometer-0\" (UID: \"dd478071-4e9d-402f-afa7-fbd28f489095\") " pod="openstack/ceilometer-0" Nov 24 09:49:56 crc kubenswrapper[4719]: I1124 09:49:56.895139 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 09:49:57 crc kubenswrapper[4719]: I1124 09:49:57.367394 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 09:49:57 crc kubenswrapper[4719]: W1124 09:49:57.367999 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd478071_4e9d_402f_afa7_fbd28f489095.slice/crio-7b14fa6a3740de76fe79b30e1d1d19749005c734a39a1ed3c93dce5b72c2c815 WatchSource:0}: Error finding container 7b14fa6a3740de76fe79b30e1d1d19749005c734a39a1ed3c93dce5b72c2c815: Status 404 returned error can't find the container with id 7b14fa6a3740de76fe79b30e1d1d19749005c734a39a1ed3c93dce5b72c2c815 Nov 24 09:49:57 crc kubenswrapper[4719]: I1124 09:49:57.516027 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd478071-4e9d-402f-afa7-fbd28f489095","Type":"ContainerStarted","Data":"7b14fa6a3740de76fe79b30e1d1d19749005c734a39a1ed3c93dce5b72c2c815"} Nov 24 09:49:58 crc kubenswrapper[4719]: I1124 09:49:58.535846 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62091726-7f9c-439d-a39a-54ce59e0130b" path="/var/lib/kubelet/pods/62091726-7f9c-439d-a39a-54ce59e0130b/volumes" Nov 24 09:49:58 crc kubenswrapper[4719]: I1124 09:49:58.537446 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd478071-4e9d-402f-afa7-fbd28f489095","Type":"ContainerStarted","Data":"a24d41eb917f7a57481d1f7dfe00f3560ddc374fc11e5a545382485c0ef09d70"} Nov 24 09:49:59 crc kubenswrapper[4719]: I1124 09:49:59.542198 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd478071-4e9d-402f-afa7-fbd28f489095","Type":"ContainerStarted","Data":"984b5fa20ddb9c984eeb33cae9e6f434c759dc40b3f8f8b8daf7ae8e64c97152"} Nov 24 09:49:59 crc kubenswrapper[4719]: I1124 09:49:59.884442 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 24 09:50:00 crc kubenswrapper[4719]: I1124 09:50:00.554731 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd478071-4e9d-402f-afa7-fbd28f489095","Type":"ContainerStarted","Data":"6d6337b08ea8f76649d97f7bdbd5155993381c837d38a6ce5f76bc8dc11a9e41"} Nov 24 09:50:02 crc kubenswrapper[4719]: I1124 09:50:02.269288 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 24 09:50:02 crc kubenswrapper[4719]: I1124 09:50:02.319529 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:50:02 crc kubenswrapper[4719]: I1124 09:50:02.577104 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd478071-4e9d-402f-afa7-fbd28f489095","Type":"ContainerStarted","Data":"9543a41b5c2cf8502735890ff9d2970b7d7dab6a3281482d9f7719ab45c67865"} Nov 24 09:50:02 crc kubenswrapper[4719]: I1124 09:50:02.577255 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerName="manila-scheduler" containerID="cri-o://73ed274a892e64ab5b517b6d026a6a25956bab591773753f15bc96828b53db60" gracePeriod=30 Nov 24 09:50:02 crc kubenswrapper[4719]: I1124 09:50:02.577817 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" 
podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerName="probe" containerID="cri-o://3f0e5189567e7fb0c7567813b3f096b5d299316c30cd250f9a7c0ed70440a7db" gracePeriod=30 Nov 24 09:50:02 crc kubenswrapper[4719]: I1124 09:50:02.613998 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.327836444 podStartE2EDuration="6.613979115s" podCreationTimestamp="2025-11-24 09:49:56 +0000 UTC" firstStartedPulling="2025-11-24 09:49:57.370559543 +0000 UTC m=+3373.701832805" lastFinishedPulling="2025-11-24 09:50:01.656702224 +0000 UTC m=+3377.987975476" observedRunningTime="2025-11-24 09:50:02.601020495 +0000 UTC m=+3378.932293757" watchObservedRunningTime="2025-11-24 09:50:02.613979115 +0000 UTC m=+3378.945252367" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.611411 4719 generic.go:334] "Generic (PLEG): container finished" podID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerID="3f0e5189567e7fb0c7567813b3f096b5d299316c30cd250f9a7c0ed70440a7db" exitCode=0 Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.611689 4719 generic.go:334] "Generic (PLEG): container finished" podID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerID="73ed274a892e64ab5b517b6d026a6a25956bab591773753f15bc96828b53db60" exitCode=0 Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.611626 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"63322e98-36aa-491e-9ba6-ec47b452f3aa","Type":"ContainerDied","Data":"3f0e5189567e7fb0c7567813b3f096b5d299316c30cd250f9a7c0ed70440a7db"} Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.613589 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"63322e98-36aa-491e-9ba6-ec47b452f3aa","Type":"ContainerDied","Data":"73ed274a892e64ab5b517b6d026a6a25956bab591773753f15bc96828b53db60"} Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.613723 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"63322e98-36aa-491e-9ba6-ec47b452f3aa","Type":"ContainerDied","Data":"becaf11134cfc67402d94f29e5aabf37667e7837e9e3eab6dd487965fb462ccc"} Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.613734 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="becaf11134cfc67402d94f29e5aabf37667e7837e9e3eab6dd487965fb462ccc" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.613782 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.665435 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.691448 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data\") pod \"63322e98-36aa-491e-9ba6-ec47b452f3aa\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.691867 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63322e98-36aa-491e-9ba6-ec47b452f3aa-etc-machine-id\") pod \"63322e98-36aa-491e-9ba6-ec47b452f3aa\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.692012 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-scripts\") pod \"63322e98-36aa-491e-9ba6-ec47b452f3aa\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.692137 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-combined-ca-bundle\") pod \"63322e98-36aa-491e-9ba6-ec47b452f3aa\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.692232 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w2wc\" (UniqueName: \"kubernetes.io/projected/63322e98-36aa-491e-9ba6-ec47b452f3aa-kube-api-access-4w2wc\") pod \"63322e98-36aa-491e-9ba6-ec47b452f3aa\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.692351 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data-custom\") pod \"63322e98-36aa-491e-9ba6-ec47b452f3aa\" (UID: \"63322e98-36aa-491e-9ba6-ec47b452f3aa\") " Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.692758 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63322e98-36aa-491e-9ba6-ec47b452f3aa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "63322e98-36aa-491e-9ba6-ec47b452f3aa" (UID: "63322e98-36aa-491e-9ba6-ec47b452f3aa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.693243 4719 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63322e98-36aa-491e-9ba6-ec47b452f3aa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.698816 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-scripts" (OuterVolumeSpecName: "scripts") pod "63322e98-36aa-491e-9ba6-ec47b452f3aa" (UID: "63322e98-36aa-491e-9ba6-ec47b452f3aa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.704136 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "63322e98-36aa-491e-9ba6-ec47b452f3aa" (UID: "63322e98-36aa-491e-9ba6-ec47b452f3aa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.711444 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63322e98-36aa-491e-9ba6-ec47b452f3aa-kube-api-access-4w2wc" (OuterVolumeSpecName: "kube-api-access-4w2wc") pod "63322e98-36aa-491e-9ba6-ec47b452f3aa" (UID: "63322e98-36aa-491e-9ba6-ec47b452f3aa"). InnerVolumeSpecName "kube-api-access-4w2wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.795098 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.795145 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w2wc\" (UniqueName: \"kubernetes.io/projected/63322e98-36aa-491e-9ba6-ec47b452f3aa-kube-api-access-4w2wc\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.795159 4719 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.807559 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63322e98-36aa-491e-9ba6-ec47b452f3aa" (UID: "63322e98-36aa-491e-9ba6-ec47b452f3aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.862960 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data" (OuterVolumeSpecName: "config-data") pod "63322e98-36aa-491e-9ba6-ec47b452f3aa" (UID: "63322e98-36aa-491e-9ba6-ec47b452f3aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.897067 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:03 crc kubenswrapper[4719]: I1124 09:50:03.897100 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63322e98-36aa-491e-9ba6-ec47b452f3aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.618935 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.650732 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.662328 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.673395 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:50:04 crc kubenswrapper[4719]: E1124 09:50:04.673844 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerName="probe" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.673862 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerName="probe" Nov 24 09:50:04 crc kubenswrapper[4719]: E1124 09:50:04.673880 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerName="manila-scheduler" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.673887 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerName="manila-scheduler" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.674146 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerName="manila-scheduler" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.674180 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" containerName="probe" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.675403 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.677188 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.692711 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.714408 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e101dc58-4d71-4456-aa34-e215690b34bf-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.714524 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-scripts\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.714652 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.714672 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kdc2\" (UniqueName: \"kubernetes.io/projected/e101dc58-4d71-4456-aa34-e215690b34bf-kube-api-access-8kdc2\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.714775 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-config-data\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.714906 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.816231 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.816317 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e101dc58-4d71-4456-aa34-e215690b34bf-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.816347 4719 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-scripts\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.816396 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e101dc58-4d71-4456-aa34-e215690b34bf-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.816412 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.816483 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kdc2\" (UniqueName: \"kubernetes.io/projected/e101dc58-4d71-4456-aa34-e215690b34bf-kube-api-access-8kdc2\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.816572 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-config-data\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.828627 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-scripts\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.828971 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-config-data\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.829233 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.832913 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kdc2\" (UniqueName: \"kubernetes.io/projected/e101dc58-4d71-4456-aa34-e215690b34bf-kube-api-access-8kdc2\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 crc kubenswrapper[4719]: I1124 09:50:04.833934 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e101dc58-4d71-4456-aa34-e215690b34bf-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"e101dc58-4d71-4456-aa34-e215690b34bf\") " pod="openstack/manila-scheduler-0" Nov 24 09:50:04 
crc kubenswrapper[4719]: I1124 09:50:04.993916 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 24 09:50:05 crc kubenswrapper[4719]: W1124 09:50:05.509734 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode101dc58_4d71_4456_aa34_e215690b34bf.slice/crio-6135c0fcf6c8236f60438f2991eff63f7043e6799c7bba76b721fac3a4b5affe WatchSource:0}: Error finding container 6135c0fcf6c8236f60438f2991eff63f7043e6799c7bba76b721fac3a4b5affe: Status 404 returned error can't find the container with id 6135c0fcf6c8236f60438f2991eff63f7043e6799c7bba76b721fac3a4b5affe Nov 24 09:50:05 crc kubenswrapper[4719]: I1124 09:50:05.520325 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 24 09:50:05 crc kubenswrapper[4719]: I1124 09:50:05.642547 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e101dc58-4d71-4456-aa34-e215690b34bf","Type":"ContainerStarted","Data":"6135c0fcf6c8236f60438f2991eff63f7043e6799c7bba76b721fac3a4b5affe"} Nov 24 09:50:06 crc kubenswrapper[4719]: I1124 09:50:06.554157 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63322e98-36aa-491e-9ba6-ec47b452f3aa" path="/var/lib/kubelet/pods/63322e98-36aa-491e-9ba6-ec47b452f3aa/volumes" Nov 24 09:50:06 crc kubenswrapper[4719]: I1124 09:50:06.654838 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e101dc58-4d71-4456-aa34-e215690b34bf","Type":"ContainerStarted","Data":"4fb5ce481d56eecc6c50e343efee451579dacdcd823d03e94abd8be96ec9b013"} Nov 24 09:50:08 crc kubenswrapper[4719]: I1124 09:50:08.905357 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 24 09:50:09 crc kubenswrapper[4719]: I1124 09:50:09.686441 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"e101dc58-4d71-4456-aa34-e215690b34bf","Type":"ContainerStarted","Data":"394fc184643b1bcfbddef23fd7a4f80d112467b5874786c7c04ec30681596a15"} Nov 24 09:50:11 crc kubenswrapper[4719]: I1124 09:50:11.506701 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 24 09:50:11 crc kubenswrapper[4719]: I1124 09:50:11.571515 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 24 09:50:11 crc kubenswrapper[4719]: I1124 09:50:11.718363 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerName="manila-share" containerID="cri-o://ce0f3c7fed23e8413f74630585f7ab48b5701e658a73556df693ccb854e24583" gracePeriod=30 Nov 24 09:50:11 crc kubenswrapper[4719]: I1124 09:50:11.718482 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerName="probe" containerID="cri-o://78652d204d4d31ea5c6e5cd3b92245a860ce021e6637d4d763869619e2316660" gracePeriod=30 Nov 24 09:50:11 crc kubenswrapper[4719]: I1124 09:50:11.747810 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=7.747788251 podStartE2EDuration="7.747788251s" podCreationTimestamp="2025-11-24 09:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:50:11.744863258 +0000 UTC m=+3388.076136520" watchObservedRunningTime="2025-11-24 09:50:11.747788251 +0000 UTC m=+3388.079061503" Nov 24 09:50:12 crc kubenswrapper[4719]: I1124 09:50:12.197671 4719 scope.go:117] "RemoveContainer" containerID="4c9552b2f51e8194754c00e5b74df4f294fb35dd3caf9e2d9f19c6c7c5dc7935" Nov 24 09:50:12 crc kubenswrapper[4719]: I1124 09:50:12.221602 4719 scope.go:117] "RemoveContainer" containerID="29d9969e3cd5ff0dc73abea3e92356e9f6ebb0cb6e8d5068348f0030f502f1d5" Nov 24 09:50:12 crc kubenswrapper[4719]: I1124 09:50:12.248992 4719 scope.go:117] "RemoveContainer" containerID="400edd73c098a1a6dafb0d4ca888f593bab289641aac9c609a6b5562d406bcfa" Nov 24 09:50:12 crc kubenswrapper[4719]: I1124 09:50:12.272106 4719 scope.go:117] "RemoveContainer" containerID="68b306888f5524ae2c072d6156995c841184149b57d112d2f15a78e6bae82ac3" Nov 24 09:50:12 crc kubenswrapper[4719]: I1124 09:50:12.730737 4719 generic.go:334] "Generic (PLEG): container finished" podID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerID="78652d204d4d31ea5c6e5cd3b92245a860ce021e6637d4d763869619e2316660" exitCode=0 Nov 24 09:50:12 crc kubenswrapper[4719]: I1124 09:50:12.730774 4719 generic.go:334] "Generic (PLEG): container finished" podID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerID="ce0f3c7fed23e8413f74630585f7ab48b5701e658a73556df693ccb854e24583" exitCode=1 Nov 24 09:50:12 crc kubenswrapper[4719]: I1124 09:50:12.730798 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"55001879-8601-4a7a-b3df-4b847f9b72e4","Type":"ContainerDied","Data":"78652d204d4d31ea5c6e5cd3b92245a860ce021e6637d4d763869619e2316660"} Nov 24 09:50:12 crc kubenswrapper[4719]: I1124 09:50:12.730828 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"55001879-8601-4a7a-b3df-4b847f9b72e4","Type":"ContainerDied","Data":"ce0f3c7fed23e8413f74630585f7ab48b5701e658a73556df693ccb854e24583"} Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.128300 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219055 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data\") pod \"55001879-8601-4a7a-b3df-4b847f9b72e4\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219117 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-combined-ca-bundle\") pod \"55001879-8601-4a7a-b3df-4b847f9b72e4\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219153 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4jvr\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-kube-api-access-z4jvr\") pod \"55001879-8601-4a7a-b3df-4b847f9b72e4\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219171 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-scripts\") pod \"55001879-8601-4a7a-b3df-4b847f9b72e4\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219197 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-etc-machine-id\") pod \"55001879-8601-4a7a-b3df-4b847f9b72e4\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219304 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data-custom\") pod \"55001879-8601-4a7a-b3df-4b847f9b72e4\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219322 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-var-lib-manila\") pod \"55001879-8601-4a7a-b3df-4b847f9b72e4\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219386 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-ceph\") pod \"55001879-8601-4a7a-b3df-4b847f9b72e4\" (UID: \"55001879-8601-4a7a-b3df-4b847f9b72e4\") " Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219761 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "55001879-8601-4a7a-b3df-4b847f9b72e4" (UID: "55001879-8601-4a7a-b3df-4b847f9b72e4"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.219839 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "55001879-8601-4a7a-b3df-4b847f9b72e4" (UID: "55001879-8601-4a7a-b3df-4b847f9b72e4"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.224564 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "55001879-8601-4a7a-b3df-4b847f9b72e4" (UID: "55001879-8601-4a7a-b3df-4b847f9b72e4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.225013 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-kube-api-access-z4jvr" (OuterVolumeSpecName: "kube-api-access-z4jvr") pod "55001879-8601-4a7a-b3df-4b847f9b72e4" (UID: "55001879-8601-4a7a-b3df-4b847f9b72e4"). InnerVolumeSpecName "kube-api-access-z4jvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.234169 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-ceph" (OuterVolumeSpecName: "ceph") pod "55001879-8601-4a7a-b3df-4b847f9b72e4" (UID: "55001879-8601-4a7a-b3df-4b847f9b72e4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.238790 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-scripts" (OuterVolumeSpecName: "scripts") pod "55001879-8601-4a7a-b3df-4b847f9b72e4" (UID: "55001879-8601-4a7a-b3df-4b847f9b72e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.273297 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55001879-8601-4a7a-b3df-4b847f9b72e4" (UID: "55001879-8601-4a7a-b3df-4b847f9b72e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.321564 4719 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.321880 4719 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.321951 4719 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.322022 4719 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/55001879-8601-4a7a-b3df-4b847f9b72e4-var-lib-manila\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.322130 4719 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-ceph\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.322207 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.322276 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4jvr\" (UniqueName: \"kubernetes.io/projected/55001879-8601-4a7a-b3df-4b847f9b72e4-kube-api-access-z4jvr\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.329020 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data" (OuterVolumeSpecName: "config-data") pod "55001879-8601-4a7a-b3df-4b847f9b72e4" (UID: "55001879-8601-4a7a-b3df-4b847f9b72e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.424179 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55001879-8601-4a7a-b3df-4b847f9b72e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.743388 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"55001879-8601-4a7a-b3df-4b847f9b72e4","Type":"ContainerDied","Data":"3b3b019b504e268caa007bb06adf76c198ae36c5d120d9bb6898c9e53d248ce4"} Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.743486 4719 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.743486 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.743722 4719 scope.go:117] "RemoveContainer" containerID="78652d204d4d31ea5c6e5cd3b92245a860ce021e6637d4d763869619e2316660"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.771313 4719 scope.go:117] "RemoveContainer" containerID="ce0f3c7fed23e8413f74630585f7ab48b5701e658a73556df693ccb854e24583"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.788626 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.797003 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"]
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.810429 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Nov 24 09:50:13 crc kubenswrapper[4719]: E1124 09:50:13.810902 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerName="manila-share"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.810924 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerName="manila-share"
Nov 24 09:50:13 crc kubenswrapper[4719]: E1124 09:50:13.810946 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerName="probe"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.810954 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerName="probe"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.811203 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerName="probe"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.811232 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" containerName="manila-share"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.812471 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.814093 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.831185 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.932236 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/880fcfd8-382a-4865-997b-203e11aad18d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.932298 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-config-data\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.932330 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-scripts\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.932354 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.932590 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/880fcfd8-382a-4865-997b-203e11aad18d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.932642 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/880fcfd8-382a-4865-997b-203e11aad18d-ceph\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.932720 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:13 crc kubenswrapper[4719]: I1124 09:50:13.932802 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phlmb\" (UniqueName: \"kubernetes.io/projected/880fcfd8-382a-4865-997b-203e11aad18d-kube-api-access-phlmb\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.034390 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phlmb\" (UniqueName: \"kubernetes.io/projected/880fcfd8-382a-4865-997b-203e11aad18d-kube-api-access-phlmb\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.034561 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/880fcfd8-382a-4865-997b-203e11aad18d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.034593 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-config-data\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.034678 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-scripts\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.034702 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.034640 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/880fcfd8-382a-4865-997b-203e11aad18d-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.035492 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/880fcfd8-382a-4865-997b-203e11aad18d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.035533 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/880fcfd8-382a-4865-997b-203e11aad18d-ceph\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.035574 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.035584 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/880fcfd8-382a-4865-997b-203e11aad18d-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.039154 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-scripts\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.039292 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/880fcfd8-382a-4865-997b-203e11aad18d-ceph\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.040232 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-config-data\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.040668 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.040679 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/880fcfd8-382a-4865-997b-203e11aad18d-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.061549 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phlmb\" (UniqueName: \"kubernetes.io/projected/880fcfd8-382a-4865-997b-203e11aad18d-kube-api-access-phlmb\") pod \"manila-share-share1-0\" (UID: \"880fcfd8-382a-4865-997b-203e11aad18d\") " pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.133488 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.531918 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55001879-8601-4a7a-b3df-4b847f9b72e4" path="/var/lib/kubelet/pods/55001879-8601-4a7a-b3df-4b847f9b72e4/volumes"
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.700730 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.763704 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"880fcfd8-382a-4865-997b-203e11aad18d","Type":"ContainerStarted","Data":"6042900b633f7d39bc2412db366a552f06ed4d746cff2e1457817b02bccdd011"}
Nov 24 09:50:14 crc kubenswrapper[4719]: I1124 09:50:14.994601 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Nov 24 09:50:15 crc kubenswrapper[4719]: I1124 09:50:15.779428 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"880fcfd8-382a-4865-997b-203e11aad18d","Type":"ContainerStarted","Data":"a36f26da9552624af04bbf183e663a13bd90638ada49c8a9607e0334311b4858"}
Nov 24 09:50:15 crc kubenswrapper[4719]: I1124 09:50:15.780512 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"880fcfd8-382a-4865-997b-203e11aad18d","Type":"ContainerStarted","Data":"c32e81b95a54cc69bd4eb0ba487c5edbb9315bf56e881565e511923854e3efc5"}
Nov 24 09:50:15 crc kubenswrapper[4719]: I1124 09:50:15.810179 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.810156638 podStartE2EDuration="2.810156638s" podCreationTimestamp="2025-11-24 09:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 09:50:15.801462569 +0000 UTC m=+3392.132735841" watchObservedRunningTime="2025-11-24 09:50:15.810156638 +0000 UTC m=+3392.141429890"
Nov 24 09:50:24 crc kubenswrapper[4719]: I1124 09:50:24.133620 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Nov 24 09:50:26 crc kubenswrapper[4719]: I1124 09:50:26.492309 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Nov 24 09:50:26 crc kubenswrapper[4719]: I1124 09:50:26.917855 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 24 09:50:33 crc kubenswrapper[4719]: I1124 09:50:33.929131 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cvbqp"]
Nov 24 09:50:33 crc kubenswrapper[4719]: I1124 09:50:33.931532 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:33 crc kubenswrapper[4719]: I1124 09:50:33.944528 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvbqp"]
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.079421 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-catalog-content\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.080127 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w79mz\" (UniqueName: \"kubernetes.io/projected/80004102-824c-4ee7-bc4f-f4def8fe810d-kube-api-access-w79mz\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.080246 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-utilities\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.181018 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-catalog-content\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.181082 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w79mz\" (UniqueName: \"kubernetes.io/projected/80004102-824c-4ee7-bc4f-f4def8fe810d-kube-api-access-w79mz\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.181114 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-utilities\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.181591 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-utilities\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.181929 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-catalog-content\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.203130 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w79mz\" (UniqueName: \"kubernetes.io/projected/80004102-824c-4ee7-bc4f-f4def8fe810d-kube-api-access-w79mz\") pod \"certified-operators-cvbqp\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") " pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.258273 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.769956 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cvbqp"]
Nov 24 09:50:34 crc kubenswrapper[4719]: I1124 09:50:34.999006 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvbqp" event={"ID":"80004102-824c-4ee7-bc4f-f4def8fe810d","Type":"ContainerStarted","Data":"e67d2a96bbae703cf69e8e712faa58883d5c116829e3b7200aed0bdd72ab47d5"}
Nov 24 09:50:35 crc kubenswrapper[4719]: I1124 09:50:35.804531 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Nov 24 09:50:36 crc kubenswrapper[4719]: I1124 09:50:36.010592 4719 generic.go:334] "Generic (PLEG): container finished" podID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerID="0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a" exitCode=0
Nov 24 09:50:36 crc kubenswrapper[4719]: I1124 09:50:36.012388 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvbqp" event={"ID":"80004102-824c-4ee7-bc4f-f4def8fe810d","Type":"ContainerDied","Data":"0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a"}
Nov 24 09:50:38 crc kubenswrapper[4719]: I1124 09:50:38.030370 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvbqp" event={"ID":"80004102-824c-4ee7-bc4f-f4def8fe810d","Type":"ContainerStarted","Data":"af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795"}
Nov 24 09:50:41 crc kubenswrapper[4719]: I1124 09:50:41.060994 4719 generic.go:334] "Generic (PLEG): container finished" podID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerID="af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795" exitCode=0
Nov 24 09:50:41 crc kubenswrapper[4719]: I1124 09:50:41.061078 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvbqp" event={"ID":"80004102-824c-4ee7-bc4f-f4def8fe810d","Type":"ContainerDied","Data":"af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795"}
Nov 24 09:50:42 crc kubenswrapper[4719]: I1124 09:50:42.070514 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvbqp" event={"ID":"80004102-824c-4ee7-bc4f-f4def8fe810d","Type":"ContainerStarted","Data":"af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d"}
Nov 24 09:50:42 crc kubenswrapper[4719]: I1124 09:50:42.092975 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cvbqp" podStartSLOduration=3.511718888 podStartE2EDuration="9.092960261s" podCreationTimestamp="2025-11-24 09:50:33 +0000 UTC" firstStartedPulling="2025-11-24 09:50:36.013096011 +0000 UTC m=+3412.344369263" lastFinishedPulling="2025-11-24 09:50:41.594337344 +0000 UTC m=+3417.925610636" observedRunningTime="2025-11-24 09:50:42.09187659 +0000 UTC m=+3418.423149862" watchObservedRunningTime="2025-11-24 09:50:42.092960261 +0000 UTC m=+3418.424233503"
Nov 24 09:50:44 crc kubenswrapper[4719]: I1124 09:50:44.259572 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:44 crc kubenswrapper[4719]: I1124 09:50:44.260215 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:50:45 crc kubenswrapper[4719]: I1124 09:50:45.309898 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cvbqp" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="registry-server" probeResult="failure" output=<
Nov 24 09:50:45 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s
Nov 24 09:50:45 crc kubenswrapper[4719]: >
Nov 24 09:50:55 crc kubenswrapper[4719]: I1124 09:50:55.309392 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cvbqp" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="registry-server" probeResult="failure" output=<
Nov 24 09:50:55 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s
Nov 24 09:50:55 crc kubenswrapper[4719]: >
Nov 24 09:51:04 crc kubenswrapper[4719]: I1124 09:51:04.312469 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:51:04 crc kubenswrapper[4719]: I1124 09:51:04.363368 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:51:04 crc kubenswrapper[4719]: I1124 09:51:04.562518 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:51:04 crc kubenswrapper[4719]: I1124 09:51:04.562819 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:51:05 crc kubenswrapper[4719]: I1124 09:51:05.134009 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvbqp"]
Nov 24 09:51:06 crc kubenswrapper[4719]: I1124 09:51:06.316049 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cvbqp" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="registry-server" containerID="cri-o://af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d" gracePeriod=2
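The pod_startup_latency_tracker line above is self-checking: podStartSLOduration is podStartE2EDuration minus the time spent pulling images, and the monotonic m=+ offsets confirm it, 9.092960261 - (3417.925610636 - 3412.344369263) = 3.511718888. The same arithmetic in Go, using only the numbers printed in the log (floating-point rounding aside):

```go
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 3412.344369263 // m=+ offset, seconds
		lastFinishedPulling = 3417.925610636 // m=+ offset, seconds
		podStartE2E         = 9.092960261    // podStartE2EDuration, seconds
	)
	pulling := lastFinishedPulling - firstStartedPulling
	// SLO duration excludes image pulling; prints 3.511718888 as logged.
	fmt.Printf("podStartSLOduration = %.9f\n", podStartE2E-pulling)
}
```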
Nov 24 09:51:06 crc kubenswrapper[4719]: I1124 09:51:06.881295 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.037747 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-utilities\") pod \"80004102-824c-4ee7-bc4f-f4def8fe810d\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") "
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.037810 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-catalog-content\") pod \"80004102-824c-4ee7-bc4f-f4def8fe810d\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") "
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.037914 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w79mz\" (UniqueName: \"kubernetes.io/projected/80004102-824c-4ee7-bc4f-f4def8fe810d-kube-api-access-w79mz\") pod \"80004102-824c-4ee7-bc4f-f4def8fe810d\" (UID: \"80004102-824c-4ee7-bc4f-f4def8fe810d\") "
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.039459 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-utilities" (OuterVolumeSpecName: "utilities") pod "80004102-824c-4ee7-bc4f-f4def8fe810d" (UID: "80004102-824c-4ee7-bc4f-f4def8fe810d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.044304 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80004102-824c-4ee7-bc4f-f4def8fe810d-kube-api-access-w79mz" (OuterVolumeSpecName: "kube-api-access-w79mz") pod "80004102-824c-4ee7-bc4f-f4def8fe810d" (UID: "80004102-824c-4ee7-bc4f-f4def8fe810d"). InnerVolumeSpecName "kube-api-access-w79mz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.047335 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.047415 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w79mz\" (UniqueName: \"kubernetes.io/projected/80004102-824c-4ee7-bc4f-f4def8fe810d-kube-api-access-w79mz\") on node \"crc\" DevicePath \"\""
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.093197 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80004102-824c-4ee7-bc4f-f4def8fe810d" (UID: "80004102-824c-4ee7-bc4f-f4def8fe810d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.148943 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80004102-824c-4ee7-bc4f-f4def8fe810d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.327310 4719 generic.go:334] "Generic (PLEG): container finished" podID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerID="af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d" exitCode=0
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.327357 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvbqp" event={"ID":"80004102-824c-4ee7-bc4f-f4def8fe810d","Type":"ContainerDied","Data":"af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d"}
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.327363 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cvbqp"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.327387 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cvbqp" event={"ID":"80004102-824c-4ee7-bc4f-f4def8fe810d","Type":"ContainerDied","Data":"e67d2a96bbae703cf69e8e712faa58883d5c116829e3b7200aed0bdd72ab47d5"}
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.327406 4719 scope.go:117] "RemoveContainer" containerID="af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.374157 4719 scope.go:117] "RemoveContainer" containerID="af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.378337 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cvbqp"]
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.386821 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cvbqp"]
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.404521 4719 scope.go:117] "RemoveContainer" containerID="0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.462672 4719 scope.go:117] "RemoveContainer" containerID="af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d"
Nov 24 09:51:07 crc kubenswrapper[4719]: E1124 09:51:07.463444 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d\": container with ID starting with af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d not found: ID does not exist" containerID="af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.463482 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d"} err="failed to get container status \"af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d\": rpc error: code = NotFound desc = could not find container \"af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d\": container with ID starting with af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d not found: ID does not exist"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.463521 4719 scope.go:117] "RemoveContainer" containerID="af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795"
Nov 24 09:51:07 crc kubenswrapper[4719]: E1124 09:51:07.463841 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795\": container with ID starting with af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795 not found: ID does not exist" containerID="af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.463872 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795"} err="failed to get container status \"af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795\": rpc error: code = NotFound desc = could not find container \"af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795\": container with ID starting with af5b371b4d1c18258f7985c67ce0f15d1ca638182514a34dcef2ee32df5d8795 not found: ID does not exist"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.463890 4719 scope.go:117] "RemoveContainer" containerID="0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a"
Nov 24 09:51:07 crc kubenswrapper[4719]: E1124 09:51:07.464338 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a\": container with ID starting with 0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a not found: ID does not exist" containerID="0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a"
Nov 24 09:51:07 crc kubenswrapper[4719]: I1124 09:51:07.464368 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a"} err="failed to get container status \"0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a\": rpc error: code = NotFound desc = could not find container \"0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a\": container with ID starting with 0628b4dc6981dff26790cb78291472498abd7adaf78423504086a69d0b3a4c7a not found: ID does not exist"
Nov 24 09:51:08 crc kubenswrapper[4719]: I1124 09:51:08.536451 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" path="/var/lib/kubelet/pods/80004102-824c-4ee7-bc4f-f4def8fe810d/volumes"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.250480 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 24 09:51:28 crc kubenswrapper[4719]: E1124 09:51:28.251550 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="extract-content"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.251566 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="extract-content"
Nov 24 09:51:28 crc kubenswrapper[4719]: E1124 09:51:28.251584 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="registry-server"
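The paired E/I lines above are benign: RemoveContainer has already succeeded, so a follow-up ContainerStatus RPC for the same ID gets NotFound from CRI-O, and the deletor logs the error while effectively treating the container as already gone. A sketch of that idempotent pattern, assuming only the grpc status/codes packages; fakeContainerStatus stands in for the real CRI RuntimeService call:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// fakeContainerStatus mimics a CRI ContainerStatus call on an ID that was
// already removed, returning the NotFound error seen in the log.
func fakeContainerStatus(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

// removeIfPresent treats NotFound as success: the desired end state
// ("container gone") already holds, so there is nothing left to do.
func removeIfPresent(id string) error {
	if err := fakeContainerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Println("already gone:", id)
			return nil
		}
		return err // a real runtime failure
	}
	return nil
}

func main() {
	_ = removeIfPresent("af04baa8a2435bb697336b3c13392ee4ef58f31364b2b8efe80341534314639d")
}
```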
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.251592 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="registry-server"
Nov 24 09:51:28 crc kubenswrapper[4719]: E1124 09:51:28.251600 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="extract-utilities"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.251606 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="extract-utilities"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.251808 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="80004102-824c-4ee7-bc4f-f4def8fe810d" containerName="registry-server"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.252571 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.256236 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.256490 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mmq4t"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.257280 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.258317 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.265106 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.307375 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-config-data\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.307441 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.307520 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.409675 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.409957 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.409988 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.410017 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.410096 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.410115 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-php2d\" (UniqueName: \"kubernetes.io/projected/9c489706-83cc-4c99-9146-178f1efd5551-kube-api-access-php2d\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.410131 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-config-data\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.410171 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.410192 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.411338 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.413429 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-config-data\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.417629 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.512089 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.512157 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.512227 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.512245 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-php2d\" (UniqueName: \"kubernetes.io/projected/9c489706-83cc-4c99-9146-178f1efd5551-kube-api-access-php2d\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.512307 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.512364 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.513279 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.513529 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.513726 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.516310 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.524782 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.534478 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-php2d\" (UniqueName: \"kubernetes.io/projected/9c489706-83cc-4c99-9146-178f1efd5551-kube-api-access-php2d\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.558700 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " pod="openstack/tempest-tests-tempest"
Nov 24 09:51:28 crc kubenswrapper[4719]: I1124 09:51:28.603616 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 24 09:51:29 crc kubenswrapper[4719]: I1124 09:51:29.068146 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 24 09:51:29 crc kubenswrapper[4719]: I1124 09:51:29.514306 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9c489706-83cc-4c99-9146-178f1efd5551","Type":"ContainerStarted","Data":"885e7b9d6966b00e20e1f74140617a6099bb48893f70a5a6491a4e15f3a3a4e8"}
Nov 24 09:51:34 crc kubenswrapper[4719]: I1124 09:51:34.561720 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:51:34 crc kubenswrapper[4719]: I1124 09:51:34.562300 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:52:04 crc kubenswrapper[4719]: I1124 09:52:04.562526 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 09:52:04 crc kubenswrapper[4719]: I1124 09:52:04.563134 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 09:52:04 crc kubenswrapper[4719]: I1124 09:52:04.563208 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6"
Nov 24 09:52:04 crc kubenswrapper[4719]: I1124 09:52:04.564625 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1707be58d034cb6de2f5073861b510fe6003dfd8c59a80ccb65a0b75b54f4094"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 09:52:04 crc kubenswrapper[4719]: I1124 09:52:04.564808 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://1707be58d034cb6de2f5073861b510fe6003dfd8c59a80ccb65a0b75b54f4094" gracePeriod=600
Nov 24 09:52:04 crc kubenswrapper[4719]: E1124 09:52:04.892160 4719 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe015f89_bb6b_4fa1_b687_192013956ed6.slice/crio-conmon-1707be58d034cb6de2f5073861b510fe6003dfd8c59a80ccb65a0b75b54f4094.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 09:52:04 crc kubenswrapper[4719]: I1124 09:52:04.896398 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="1707be58d034cb6de2f5073861b510fe6003dfd8c59a80ccb65a0b75b54f4094" exitCode=0
Nov 24 09:52:04 crc kubenswrapper[4719]: I1124 09:52:04.896469 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"1707be58d034cb6de2f5073861b510fe6003dfd8c59a80ccb65a0b75b54f4094"}
Nov 24 09:52:04 crc kubenswrapper[4719]: I1124 09:52:04.896521 4719 scope.go:117] "RemoveContainer" containerID="35930a05564ab1979dce56f713d83afb960abc79ff9edb4ab71ec95fffe67e5e"
Nov 24 09:52:10 crc kubenswrapper[4719]: E1124 09:52:10.668951 4719 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Nov 24 09:52:10 crc kubenswrapper[4719]: E1124 09:52:10.673758 4719 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-php2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9c489706-83cc-4c99-9146-178f1efd5551): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 24 09:52:10 crc kubenswrapper[4719]: E1124 09:52:10.675613 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9c489706-83cc-4c99-9146-178f1efd5551"
Nov 24 09:52:11 crc kubenswrapper[4719]: I1124 09:52:11.009986 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"}
Nov 24 09:52:11 crc kubenswrapper[4719]: E1124 09:52:11.011874 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9c489706-83cc-4c99-9146-178f1efd5551"
Nov 24 09:52:24 crc kubenswrapper[4719]: I1124 09:52:24.997092 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Nov 24 09:52:26 crc kubenswrapper[4719]: I1124 09:52:26.149631 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9c489706-83cc-4c99-9146-178f1efd5551","Type":"ContainerStarted","Data":"77094dfdaf95315159cf86b077a4317e374e8a1af358532c60d704baeb0ca825"}
Nov 24 09:52:26 crc kubenswrapper[4719]: I1124 09:52:26.168143 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.254384943 podStartE2EDuration="59.168127634s" podCreationTimestamp="2025-11-24 09:51:27 +0000 UTC" firstStartedPulling="2025-11-24 09:51:29.080654025 +0000 UTC m=+3465.411927277" lastFinishedPulling="2025-11-24 09:52:24.994396706 +0000 UTC m=+3521.325669968" observedRunningTime="2025-11-24 09:52:26.162989038 +0000 UTC m=+3522.494262290" watchObservedRunningTime="2025-11-24 09:52:26.168127634 +0000 UTC m=+3522.499400886"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.581477 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nnwvp"]
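The tempest episode above is the standard image-pull failure path: the first pull is canceled mid-copy (ErrImagePull), the next sync attempt is throttled with ImagePullBackOff, and a later retry succeeds at 09:52:24, after which the container starts and the startup-latency line correctly charges the ~56s of pulling against the E2E duration rather than the SLO duration. A sketch of the back-off shape; the 10s initial delay and 5m cap are commonly cited kubelet defaults and are assumptions here, not values read from this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		// Each failed pull doubles the wait before the next attempt,
		// up to the cap; a successful pull resets the sequence.
		fmt.Printf("attempt %d: back off %v before next pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```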
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.581477 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nnwvp"]
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.586001 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nnwvp"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.610847 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nnwvp"]
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.689775 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-catalog-content\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.689975 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-utilities\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.690120 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblnj\" (UniqueName: \"kubernetes.io/projected/ef6cd69e-2074-42dd-87ad-88a33145b6c3-kube-api-access-mblnj\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.791969 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-utilities\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.792118 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mblnj\" (UniqueName: \"kubernetes.io/projected/ef6cd69e-2074-42dd-87ad-88a33145b6c3-kube-api-access-mblnj\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.792184 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-catalog-content\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.792627 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-catalog-content\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp"
Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.792847 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-utilities\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp"
"MountVolume.SetUp succeeded for volume \"kube-api-access-mblnj\" (UniqueName: \"kubernetes.io/projected/ef6cd69e-2074-42dd-87ad-88a33145b6c3-kube-api-access-mblnj\") pod \"community-operators-nnwvp\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " pod="openshift-marketplace/community-operators-nnwvp" Nov 24 09:53:01 crc kubenswrapper[4719]: I1124 09:53:01.909707 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nnwvp" Nov 24 09:53:02 crc kubenswrapper[4719]: I1124 09:53:02.504776 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nnwvp"] Nov 24 09:53:03 crc kubenswrapper[4719]: I1124 09:53:03.487401 4719 generic.go:334] "Generic (PLEG): container finished" podID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerID="ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9" exitCode=0 Nov 24 09:53:03 crc kubenswrapper[4719]: I1124 09:53:03.487465 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnwvp" event={"ID":"ef6cd69e-2074-42dd-87ad-88a33145b6c3","Type":"ContainerDied","Data":"ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9"} Nov 24 09:53:03 crc kubenswrapper[4719]: I1124 09:53:03.488730 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnwvp" event={"ID":"ef6cd69e-2074-42dd-87ad-88a33145b6c3","Type":"ContainerStarted","Data":"2790b2720b9b2db20c393408fae59e9ce4a4f3d1270e8e0742ed867662e6d850"} Nov 24 09:53:05 crc kubenswrapper[4719]: I1124 09:53:05.508062 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnwvp" event={"ID":"ef6cd69e-2074-42dd-87ad-88a33145b6c3","Type":"ContainerStarted","Data":"8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc"} Nov 24 09:53:07 crc kubenswrapper[4719]: I1124 09:53:07.547139 4719 generic.go:334] "Generic (PLEG): container finished" podID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerID="8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc" exitCode=0 Nov 24 09:53:07 crc kubenswrapper[4719]: I1124 09:53:07.547209 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnwvp" event={"ID":"ef6cd69e-2074-42dd-87ad-88a33145b6c3","Type":"ContainerDied","Data":"8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc"} Nov 24 09:53:08 crc kubenswrapper[4719]: I1124 09:53:08.557913 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnwvp" event={"ID":"ef6cd69e-2074-42dd-87ad-88a33145b6c3","Type":"ContainerStarted","Data":"66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152"} Nov 24 09:53:08 crc kubenswrapper[4719]: I1124 09:53:08.579977 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nnwvp" podStartSLOduration=3.107009349 podStartE2EDuration="7.579962438s" podCreationTimestamp="2025-11-24 09:53:01 +0000 UTC" firstStartedPulling="2025-11-24 09:53:03.489312688 +0000 UTC m=+3559.820585940" lastFinishedPulling="2025-11-24 09:53:07.962265767 +0000 UTC m=+3564.293539029" observedRunningTime="2025-11-24 09:53:08.579417093 +0000 UTC m=+3564.910690375" watchObservedRunningTime="2025-11-24 09:53:08.579962438 +0000 UTC m=+3564.911235690" Nov 24 09:53:11 crc kubenswrapper[4719]: I1124 09:53:11.911692 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-nnwvp" Nov 24 09:53:11 crc kubenswrapper[4719]: I1124 09:53:11.912001 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nnwvp" Nov 24 09:53:12 crc kubenswrapper[4719]: I1124 09:53:12.959806 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nnwvp" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="registry-server" probeResult="failure" output=< Nov 24 09:53:12 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 09:53:12 crc kubenswrapper[4719]: > Nov 24 09:53:21 crc kubenswrapper[4719]: I1124 09:53:21.961515 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nnwvp" Nov 24 09:53:22 crc kubenswrapper[4719]: I1124 09:53:22.012430 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nnwvp" Nov 24 09:53:22 crc kubenswrapper[4719]: I1124 09:53:22.204290 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nnwvp"] Nov 24 09:53:23 crc kubenswrapper[4719]: I1124 09:53:23.692552 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nnwvp" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="registry-server" containerID="cri-o://66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152" gracePeriod=2 Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.331797 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nnwvp" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.457708 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-catalog-content\") pod \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.457831 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-utilities\") pod \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.457890 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mblnj\" (UniqueName: \"kubernetes.io/projected/ef6cd69e-2074-42dd-87ad-88a33145b6c3-kube-api-access-mblnj\") pod \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\" (UID: \"ef6cd69e-2074-42dd-87ad-88a33145b6c3\") " Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.458683 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-utilities" (OuterVolumeSpecName: "utilities") pod "ef6cd69e-2074-42dd-87ad-88a33145b6c3" (UID: "ef6cd69e-2074-42dd-87ad-88a33145b6c3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.469241 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6cd69e-2074-42dd-87ad-88a33145b6c3-kube-api-access-mblnj" (OuterVolumeSpecName: "kube-api-access-mblnj") pod "ef6cd69e-2074-42dd-87ad-88a33145b6c3" (UID: "ef6cd69e-2074-42dd-87ad-88a33145b6c3"). InnerVolumeSpecName "kube-api-access-mblnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.504597 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef6cd69e-2074-42dd-87ad-88a33145b6c3" (UID: "ef6cd69e-2074-42dd-87ad-88a33145b6c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.560938 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mblnj\" (UniqueName: \"kubernetes.io/projected/ef6cd69e-2074-42dd-87ad-88a33145b6c3-kube-api-access-mblnj\") on node \"crc\" DevicePath \"\"" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.560975 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.560986 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef6cd69e-2074-42dd-87ad-88a33145b6c3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.703070 4719 generic.go:334] "Generic (PLEG): container finished" podID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerID="66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152" exitCode=0 Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.703129 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnwvp" event={"ID":"ef6cd69e-2074-42dd-87ad-88a33145b6c3","Type":"ContainerDied","Data":"66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152"} Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.703202 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnwvp" event={"ID":"ef6cd69e-2074-42dd-87ad-88a33145b6c3","Type":"ContainerDied","Data":"2790b2720b9b2db20c393408fae59e9ce4a4f3d1270e8e0742ed867662e6d850"} Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.703145 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nnwvp" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.703228 4719 scope.go:117] "RemoveContainer" containerID="66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.733747 4719 scope.go:117] "RemoveContainer" containerID="8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.735463 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nnwvp"] Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.763256 4719 scope.go:117] "RemoveContainer" containerID="ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.773225 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nnwvp"] Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.812695 4719 scope.go:117] "RemoveContainer" containerID="66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152" Nov 24 09:53:24 crc kubenswrapper[4719]: E1124 09:53:24.813269 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152\": container with ID starting with 66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152 not found: ID does not exist" containerID="66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.813322 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152"} err="failed to get container status \"66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152\": rpc error: code = NotFound desc = could not find container \"66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152\": container with ID starting with 66e947440ed00e0a5051a2b443677d2e4a2faee43fdfa7d80ebba3dfa328f152 not found: ID does not exist" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.813477 4719 scope.go:117] "RemoveContainer" containerID="8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc" Nov 24 09:53:24 crc kubenswrapper[4719]: E1124 09:53:24.813978 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc\": container with ID starting with 8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc not found: ID does not exist" containerID="8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.814004 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc"} err="failed to get container status \"8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc\": rpc error: code = NotFound desc = could not find container \"8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc\": container with ID starting with 8e868dfa4ed7ccec1cea7392811b4d84f08608c6a2f80c6672e7a6cd4fb939fc not found: ID does not exist" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.814048 4719 scope.go:117] "RemoveContainer" 
containerID="ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9" Nov 24 09:53:24 crc kubenswrapper[4719]: E1124 09:53:24.814406 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9\": container with ID starting with ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9 not found: ID does not exist" containerID="ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9" Nov 24 09:53:24 crc kubenswrapper[4719]: I1124 09:53:24.814451 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9"} err="failed to get container status \"ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9\": rpc error: code = NotFound desc = could not find container \"ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9\": container with ID starting with ab7ed6bb0c0db8191ff2dd6e5ab5678fabac5f8326250a5e314702f7aa2e18d9 not found: ID does not exist" Nov 24 09:53:26 crc kubenswrapper[4719]: I1124 09:53:26.531846 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" path="/var/lib/kubelet/pods/ef6cd69e-2074-42dd-87ad-88a33145b6c3/volumes" Nov 24 09:54:12 crc kubenswrapper[4719]: I1124 09:54:12.508411 4719 scope.go:117] "RemoveContainer" containerID="f52e94cdbb283ee04dc8651e8114525801f47379e23293c15914887791892c4c" Nov 24 09:54:12 crc kubenswrapper[4719]: I1124 09:54:12.536401 4719 scope.go:117] "RemoveContainer" containerID="92cee3f0cf9d3457fe2ba5a2a21f73cd2c0002d11adaed7bfb3bf1ce97eea47f" Nov 24 09:54:12 crc kubenswrapper[4719]: I1124 09:54:12.699023 4719 scope.go:117] "RemoveContainer" containerID="be89360286ac743253c4d35f6acdf293ef676ed0e5e71c7a07f27c16c2470b29" Nov 24 09:54:12 crc kubenswrapper[4719]: I1124 09:54:12.878691 4719 scope.go:117] "RemoveContainer" containerID="32b40757ee333fd3df11398d7e533f2c533c1860a5d953055c40474c63446049" Nov 24 09:54:13 crc kubenswrapper[4719]: I1124 09:54:13.056783 4719 scope.go:117] "RemoveContainer" containerID="e2e4a59f1150967a88ca6e644745b064eaf21723b4a265e06249691e1bbc90c9" Nov 24 09:54:13 crc kubenswrapper[4719]: I1124 09:54:13.082275 4719 scope.go:117] "RemoveContainer" containerID="63cb288d575e5a7bfd4e98bf1b25910f8f6d7c8aee7964281c2876e89d964c26" Nov 24 09:54:25 crc kubenswrapper[4719]: I1124 09:54:25.815587 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zd8lp"] Nov 24 09:54:25 crc kubenswrapper[4719]: E1124 09:54:25.816729 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="extract-utilities" Nov 24 09:54:25 crc kubenswrapper[4719]: I1124 09:54:25.816750 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="extract-utilities" Nov 24 09:54:25 crc kubenswrapper[4719]: E1124 09:54:25.816772 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="registry-server" Nov 24 09:54:25 crc kubenswrapper[4719]: I1124 09:54:25.816779 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="registry-server" Nov 24 09:54:25 crc kubenswrapper[4719]: E1124 09:54:25.816794 4719 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="extract-content" Nov 24 09:54:25 crc kubenswrapper[4719]: I1124 09:54:25.816802 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="extract-content" Nov 24 09:54:25 crc kubenswrapper[4719]: I1124 09:54:25.817019 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6cd69e-2074-42dd-87ad-88a33145b6c3" containerName="registry-server" Nov 24 09:54:25 crc kubenswrapper[4719]: I1124 09:54:25.818743 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:25 crc kubenswrapper[4719]: I1124 09:54:25.824562 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd8lp"] Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.009498 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5tx\" (UniqueName: \"kubernetes.io/projected/1a8bf22b-7444-44c8-9972-cfc11d529d1f-kube-api-access-dw5tx\") pod \"redhat-marketplace-zd8lp\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.009966 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-utilities\") pod \"redhat-marketplace-zd8lp\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.010289 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-catalog-content\") pod \"redhat-marketplace-zd8lp\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.111531 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-utilities\") pod \"redhat-marketplace-zd8lp\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.111620 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-catalog-content\") pod \"redhat-marketplace-zd8lp\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.111670 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw5tx\" (UniqueName: \"kubernetes.io/projected/1a8bf22b-7444-44c8-9972-cfc11d529d1f-kube-api-access-dw5tx\") pod \"redhat-marketplace-zd8lp\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.112369 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-utilities\") pod \"redhat-marketplace-zd8lp\" (UID: 
\"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.112458 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-catalog-content\") pod \"redhat-marketplace-zd8lp\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.134603 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw5tx\" (UniqueName: \"kubernetes.io/projected/1a8bf22b-7444-44c8-9972-cfc11d529d1f-kube-api-access-dw5tx\") pod \"redhat-marketplace-zd8lp\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.138614 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:26 crc kubenswrapper[4719]: I1124 09:54:26.656035 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd8lp"] Nov 24 09:54:27 crc kubenswrapper[4719]: I1124 09:54:27.330862 4719 generic.go:334] "Generic (PLEG): container finished" podID="1a8bf22b-7444-44c8-9972-cfc11d529d1f" containerID="13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4" exitCode=0 Nov 24 09:54:27 crc kubenswrapper[4719]: I1124 09:54:27.330929 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd8lp" event={"ID":"1a8bf22b-7444-44c8-9972-cfc11d529d1f","Type":"ContainerDied","Data":"13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4"} Nov 24 09:54:27 crc kubenswrapper[4719]: I1124 09:54:27.331203 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd8lp" event={"ID":"1a8bf22b-7444-44c8-9972-cfc11d529d1f","Type":"ContainerStarted","Data":"c4df599054242d23b525fbe3df3eb2c2af444564f09bdf1585f20743d72c81af"} Nov 24 09:54:27 crc kubenswrapper[4719]: I1124 09:54:27.334029 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 09:54:30 crc kubenswrapper[4719]: I1124 09:54:30.364730 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd8lp" event={"ID":"1a8bf22b-7444-44c8-9972-cfc11d529d1f","Type":"ContainerStarted","Data":"44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa"} Nov 24 09:54:31 crc kubenswrapper[4719]: I1124 09:54:31.374743 4719 generic.go:334] "Generic (PLEG): container finished" podID="1a8bf22b-7444-44c8-9972-cfc11d529d1f" containerID="44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa" exitCode=0 Nov 24 09:54:31 crc kubenswrapper[4719]: I1124 09:54:31.374846 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd8lp" event={"ID":"1a8bf22b-7444-44c8-9972-cfc11d529d1f","Type":"ContainerDied","Data":"44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa"} Nov 24 09:54:33 crc kubenswrapper[4719]: I1124 09:54:33.398471 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd8lp" event={"ID":"1a8bf22b-7444-44c8-9972-cfc11d529d1f","Type":"ContainerStarted","Data":"6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4"} Nov 24 09:54:33 crc 
kubenswrapper[4719]: I1124 09:54:33.417512 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zd8lp" podStartSLOduration=3.283849721 podStartE2EDuration="8.417491615s" podCreationTimestamp="2025-11-24 09:54:25 +0000 UTC" firstStartedPulling="2025-11-24 09:54:27.333782835 +0000 UTC m=+3643.665056087" lastFinishedPulling="2025-11-24 09:54:32.467424729 +0000 UTC m=+3648.798697981" observedRunningTime="2025-11-24 09:54:33.413348237 +0000 UTC m=+3649.744621519" watchObservedRunningTime="2025-11-24 09:54:33.417491615 +0000 UTC m=+3649.748764877" Nov 24 09:54:34 crc kubenswrapper[4719]: I1124 09:54:34.561666 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:54:34 crc kubenswrapper[4719]: I1124 09:54:34.561966 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:54:36 crc kubenswrapper[4719]: I1124 09:54:36.143390 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:36 crc kubenswrapper[4719]: I1124 09:54:36.143729 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:36 crc kubenswrapper[4719]: I1124 09:54:36.198582 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:46 crc kubenswrapper[4719]: I1124 09:54:46.206316 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:46 crc kubenswrapper[4719]: I1124 09:54:46.272700 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd8lp"] Nov 24 09:54:46 crc kubenswrapper[4719]: I1124 09:54:46.510470 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zd8lp" podUID="1a8bf22b-7444-44c8-9972-cfc11d529d1f" containerName="registry-server" containerID="cri-o://6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4" gracePeriod=2 Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.014464 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.066368 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-catalog-content\") pod \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.066474 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw5tx\" (UniqueName: \"kubernetes.io/projected/1a8bf22b-7444-44c8-9972-cfc11d529d1f-kube-api-access-dw5tx\") pod \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.066571 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-utilities\") pod \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\" (UID: \"1a8bf22b-7444-44c8-9972-cfc11d529d1f\") " Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.067453 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-utilities" (OuterVolumeSpecName: "utilities") pod "1a8bf22b-7444-44c8-9972-cfc11d529d1f" (UID: "1a8bf22b-7444-44c8-9972-cfc11d529d1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.077192 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a8bf22b-7444-44c8-9972-cfc11d529d1f-kube-api-access-dw5tx" (OuterVolumeSpecName: "kube-api-access-dw5tx") pod "1a8bf22b-7444-44c8-9972-cfc11d529d1f" (UID: "1a8bf22b-7444-44c8-9972-cfc11d529d1f"). InnerVolumeSpecName "kube-api-access-dw5tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.099926 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a8bf22b-7444-44c8-9972-cfc11d529d1f" (UID: "1a8bf22b-7444-44c8-9972-cfc11d529d1f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.169215 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.169264 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw5tx\" (UniqueName: \"kubernetes.io/projected/1a8bf22b-7444-44c8-9972-cfc11d529d1f-kube-api-access-dw5tx\") on node \"crc\" DevicePath \"\"" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.169279 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a8bf22b-7444-44c8-9972-cfc11d529d1f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.521240 4719 generic.go:334] "Generic (PLEG): container finished" podID="1a8bf22b-7444-44c8-9972-cfc11d529d1f" containerID="6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4" exitCode=0 Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.521278 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd8lp" event={"ID":"1a8bf22b-7444-44c8-9972-cfc11d529d1f","Type":"ContainerDied","Data":"6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4"} Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.521303 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd8lp" event={"ID":"1a8bf22b-7444-44c8-9972-cfc11d529d1f","Type":"ContainerDied","Data":"c4df599054242d23b525fbe3df3eb2c2af444564f09bdf1585f20743d72c81af"} Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.521305 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zd8lp" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.521322 4719 scope.go:117] "RemoveContainer" containerID="6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.568471 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd8lp"] Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.572866 4719 scope.go:117] "RemoveContainer" containerID="44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.581099 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd8lp"] Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.596765 4719 scope.go:117] "RemoveContainer" containerID="13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.659013 4719 scope.go:117] "RemoveContainer" containerID="6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4" Nov 24 09:54:47 crc kubenswrapper[4719]: E1124 09:54:47.659681 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4\": container with ID starting with 6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4 not found: ID does not exist" containerID="6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.659726 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4"} err="failed to get container status \"6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4\": rpc error: code = NotFound desc = could not find container \"6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4\": container with ID starting with 6ebdbde1bd04ca35ed53fcab7dc187e12bb8addec5023fe724889809dfe51df4 not found: ID does not exist" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.659755 4719 scope.go:117] "RemoveContainer" containerID="44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa" Nov 24 09:54:47 crc kubenswrapper[4719]: E1124 09:54:47.660409 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa\": container with ID starting with 44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa not found: ID does not exist" containerID="44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.660451 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa"} err="failed to get container status \"44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa\": rpc error: code = NotFound desc = could not find container \"44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa\": container with ID starting with 44c64c9671b3c9b7a990a1e2ac370a7c7a75f08e2bedec1f0b26059d5b3b23fa not found: ID does not exist" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.660477 4719 scope.go:117] "RemoveContainer" 
containerID="13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4" Nov 24 09:54:47 crc kubenswrapper[4719]: E1124 09:54:47.660905 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4\": container with ID starting with 13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4 not found: ID does not exist" containerID="13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4" Nov 24 09:54:47 crc kubenswrapper[4719]: I1124 09:54:47.661138 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4"} err="failed to get container status \"13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4\": rpc error: code = NotFound desc = could not find container \"13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4\": container with ID starting with 13bda5880d0924df2ce418911e6e331dad3412a173337d495994d77d931765d4 not found: ID does not exist" Nov 24 09:54:48 crc kubenswrapper[4719]: I1124 09:54:48.531926 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a8bf22b-7444-44c8-9972-cfc11d529d1f" path="/var/lib/kubelet/pods/1a8bf22b-7444-44c8-9972-cfc11d529d1f/volumes" Nov 24 09:55:04 crc kubenswrapper[4719]: I1124 09:55:04.563134 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:55:04 crc kubenswrapper[4719]: I1124 09:55:04.563600 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.562148 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.563173 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.563238 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.564258 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 09:55:34 crc 
Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.564340 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" gracePeriod=600
Nov 24 09:55:34 crc kubenswrapper[4719]: E1124 09:55:34.708021 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.941785 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" exitCode=0
Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.941846 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"}
Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.942295 4719 scope.go:117] "RemoveContainer" containerID="1707be58d034cb6de2f5073861b510fe6003dfd8c59a80ccb65a0b75b54f4094"
Nov 24 09:55:34 crc kubenswrapper[4719]: I1124 09:55:34.943320 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
Nov 24 09:55:34 crc kubenswrapper[4719]: E1124 09:55:34.943746 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:55:47 crc kubenswrapper[4719]: I1124 09:55:47.520631 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
Nov 24 09:55:47 crc kubenswrapper[4719]: E1124 09:55:47.521547 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:56:02 crc kubenswrapper[4719]: I1124 09:56:02.520858 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
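From here until roughly 10:00 the log is dominated by this retry loop: each sync attempt logs a RemoveContainer for the dead container ID plus the CrashLoopBackOff error, and nothing else happens until the back-off expires. The kubelet's restart back-off doubles from a 10s base up to a 5m ceiling (those are the stock kubelet constants, assumed here rather than read from this node's configuration), which is why every message already says "back-off 5m0s": the daemon has been failing long enough to hit the cap. Schematically:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Exponential container restart back-off: 10s base, doubling, 5m cap.
    	// Constants are the kubelet defaults, assumed for this cluster.
    	delay := 10 * time.Second
    	for i := 1; i <= 8; i++ {
    		fmt.Printf("restart %d: wait %v\n", i, delay)
    		if delay *= 2; delay > 5*time.Minute {
    			delay = 5 * time.Minute
    		}
    	}
    	// prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m from the sixth restart on
    }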
pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:56:13 crc kubenswrapper[4719]: I1124 09:56:13.203809 4719 scope.go:117] "RemoveContainer" containerID="3f0e5189567e7fb0c7567813b3f096b5d299316c30cd250f9a7c0ed70440a7db" Nov 24 09:56:13 crc kubenswrapper[4719]: I1124 09:56:13.226238 4719 scope.go:117] "RemoveContainer" containerID="73ed274a892e64ab5b517b6d026a6a25956bab591773753f15bc96828b53db60" Nov 24 09:56:15 crc kubenswrapper[4719]: I1124 09:56:15.521091 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:56:15 crc kubenswrapper[4719]: E1124 09:56:15.521837 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:56:29 crc kubenswrapper[4719]: I1124 09:56:29.521528 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:56:29 crc kubenswrapper[4719]: E1124 09:56:29.522245 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:56:42 crc kubenswrapper[4719]: I1124 09:56:42.520914 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:56:42 crc kubenswrapper[4719]: E1124 09:56:42.521656 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:56:54 crc kubenswrapper[4719]: I1124 09:56:54.534425 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:56:54 crc kubenswrapper[4719]: E1124 09:56:54.535240 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:57:09 crc kubenswrapper[4719]: I1124 09:57:09.520696 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:57:09 crc kubenswrapper[4719]: E1124 09:57:09.521481 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:57:21 crc kubenswrapper[4719]: I1124 09:57:21.521200 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:57:21 crc kubenswrapper[4719]: E1124 09:57:21.521972 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:57:34 crc kubenswrapper[4719]: I1124 09:57:34.067901 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-5xnf7"] Nov 24 09:57:34 crc kubenswrapper[4719]: I1124 09:57:34.077210 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-5xnf7"] Nov 24 09:57:34 crc kubenswrapper[4719]: I1124 09:57:34.533985 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef8c035-164f-4eab-9e45-70e0bdd48b10" path="/var/lib/kubelet/pods/fef8c035-164f-4eab-9e45-70e0bdd48b10/volumes" Nov 24 09:57:35 crc kubenswrapper[4719]: I1124 09:57:35.039593 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-ae80-account-create-m6k9l"] Nov 24 09:57:35 crc kubenswrapper[4719]: I1124 09:57:35.054506 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-ae80-account-create-m6k9l"] Nov 24 09:57:35 crc kubenswrapper[4719]: I1124 09:57:35.520988 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:57:35 crc kubenswrapper[4719]: E1124 09:57:35.521315 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:57:36 crc kubenswrapper[4719]: I1124 09:57:36.532824 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5144bf8-a3e7-4c00-aca4-c9d0e02bf441" path="/var/lib/kubelet/pods/e5144bf8-a3e7-4c00-aca4-c9d0e02bf441/volumes" Nov 24 09:57:47 crc kubenswrapper[4719]: I1124 09:57:47.520632 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:57:47 crc kubenswrapper[4719]: E1124 09:57:47.521480 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:57:59 crc kubenswrapper[4719]: I1124 09:57:59.521440 4719 scope.go:117] "RemoveContainer" 
containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:57:59 crc kubenswrapper[4719]: E1124 09:57:59.522165 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:58:01 crc kubenswrapper[4719]: I1124 09:58:01.399649 4719 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 09:58:01 crc kubenswrapper[4719]: I1124 09:58:01.399940 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 09:58:12 crc kubenswrapper[4719]: I1124 09:58:12.520506 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:58:12 crc kubenswrapper[4719]: E1124 09:58:12.521298 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:58:13 crc kubenswrapper[4719]: I1124 09:58:13.283900 4719 scope.go:117] "RemoveContainer" containerID="e708192b8302e957628273cc52bff9da6b4101b1e6e1e796fdf9a9b5fe3539c5" Nov 24 09:58:13 crc kubenswrapper[4719]: I1124 09:58:13.329140 4719 scope.go:117] "RemoveContainer" containerID="8881051cc5fc59ebd5f5c1ac7cd147c280c35eb14a3096cee38b57826fac4c57" Nov 24 09:58:24 crc kubenswrapper[4719]: I1124 09:58:24.529738 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:58:24 crc kubenswrapper[4719]: E1124 09:58:24.530896 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 09:58:35 crc kubenswrapper[4719]: I1124 09:58:35.521889 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 09:58:35 crc kubenswrapper[4719]: E1124 09:58:35.523640 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Nov 24 09:58:35 crc kubenswrapper[4719]: E1124 09:58:35.523640 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:58:47 crc kubenswrapper[4719]: I1124 09:58:47.521352 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
Nov 24 09:58:47 crc kubenswrapper[4719]: E1124 09:58:47.522095 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:58:58 crc kubenswrapper[4719]: I1124 09:58:58.520596 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
Nov 24 09:58:58 crc kubenswrapper[4719]: E1124 09:58:58.521332 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:59:12 crc kubenswrapper[4719]: I1124 09:59:12.525346 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
Nov 24 09:59:12 crc kubenswrapper[4719]: E1124 09:59:12.527649 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:59:27 crc kubenswrapper[4719]: I1124 09:59:27.520878 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
Nov 24 09:59:27 crc kubenswrapper[4719]: E1124 09:59:27.521774 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:59:39 crc kubenswrapper[4719]: I1124 09:59:39.061861 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-nv6wn"]
Nov 24 09:59:39 crc kubenswrapper[4719]: I1124 09:59:39.070079 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-nv6wn"]
Nov 24 09:59:40 crc kubenswrapper[4719]: I1124 09:59:40.521777 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
Nov 24 09:59:40 crc kubenswrapper[4719]: E1124 09:59:40.522422 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 09:59:40 crc kubenswrapper[4719]: I1124 09:59:40.533553 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e8128a6-20cf-4abd-a677-fc1d0f61fd23" path="/var/lib/kubelet/pods/1e8128a6-20cf-4abd-a677-fc1d0f61fd23/volumes"
Nov 24 09:59:55 crc kubenswrapper[4719]: I1124 09:59:55.521774 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0"
Nov 24 09:59:55 crc kubenswrapper[4719]: E1124 09:59:55.522412 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.121619 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.121783 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.124356 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk"] Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.239016 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwsrg\" (UniqueName: \"kubernetes.io/projected/4b364782-7bef-4e52-9526-f42ef1376166-kube-api-access-lwsrg\") pod \"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.240542 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b364782-7bef-4e52-9526-f42ef1376166-secret-volume\") pod \"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.240668 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b364782-7bef-4e52-9526-f42ef1376166-config-volume\") pod \"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.343517 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwsrg\" (UniqueName: \"kubernetes.io/projected/4b364782-7bef-4e52-9526-f42ef1376166-kube-api-access-lwsrg\") pod \"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.344854 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b364782-7bef-4e52-9526-f42ef1376166-secret-volume\") pod \"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.346157 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b364782-7bef-4e52-9526-f42ef1376166-config-volume\") pod \"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.346974 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b364782-7bef-4e52-9526-f42ef1376166-config-volume\") pod 
\"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.350426 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b364782-7bef-4e52-9526-f42ef1376166-secret-volume\") pod \"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.359699 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwsrg\" (UniqueName: \"kubernetes.io/projected/4b364782-7bef-4e52-9526-f42ef1376166-kube-api-access-lwsrg\") pod \"collect-profiles-29399640-kdxmk\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:01 crc kubenswrapper[4719]: I1124 10:00:01.459639 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:02 crc kubenswrapper[4719]: I1124 10:00:02.250363 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk"] Nov 24 10:00:03 crc kubenswrapper[4719]: I1124 10:00:03.113339 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" event={"ID":"4b364782-7bef-4e52-9526-f42ef1376166","Type":"ContainerStarted","Data":"b39fc1d25eae6fea085bfe08b2f7a28efccdbe46b349fb4b632387f8cc3a831c"} Nov 24 10:00:03 crc kubenswrapper[4719]: I1124 10:00:03.113631 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" event={"ID":"4b364782-7bef-4e52-9526-f42ef1376166","Type":"ContainerStarted","Data":"7d27413bfdf8e17e7dd35e091932393654830275b97eeabb9aa561ba824d1f52"} Nov 24 10:00:03 crc kubenswrapper[4719]: I1124 10:00:03.133944 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" podStartSLOduration=2.133925001 podStartE2EDuration="2.133925001s" podCreationTimestamp="2025-11-24 10:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 10:00:03.127023624 +0000 UTC m=+3979.458296876" watchObservedRunningTime="2025-11-24 10:00:03.133925001 +0000 UTC m=+3979.465198253" Nov 24 10:00:04 crc kubenswrapper[4719]: I1124 10:00:04.122201 4719 generic.go:334] "Generic (PLEG): container finished" podID="4b364782-7bef-4e52-9526-f42ef1376166" containerID="b39fc1d25eae6fea085bfe08b2f7a28efccdbe46b349fb4b632387f8cc3a831c" exitCode=0 Nov 24 10:00:04 crc kubenswrapper[4719]: I1124 10:00:04.122540 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" event={"ID":"4b364782-7bef-4e52-9526-f42ef1376166","Type":"ContainerDied","Data":"b39fc1d25eae6fea085bfe08b2f7a28efccdbe46b349fb4b632387f8cc3a831c"} Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.487084 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.625903 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b364782-7bef-4e52-9526-f42ef1376166-config-volume\") pod \"4b364782-7bef-4e52-9526-f42ef1376166\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.625991 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b364782-7bef-4e52-9526-f42ef1376166-secret-volume\") pod \"4b364782-7bef-4e52-9526-f42ef1376166\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.626466 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwsrg\" (UniqueName: \"kubernetes.io/projected/4b364782-7bef-4e52-9526-f42ef1376166-kube-api-access-lwsrg\") pod \"4b364782-7bef-4e52-9526-f42ef1376166\" (UID: \"4b364782-7bef-4e52-9526-f42ef1376166\") " Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.626805 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b364782-7bef-4e52-9526-f42ef1376166-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b364782-7bef-4e52-9526-f42ef1376166" (UID: "4b364782-7bef-4e52-9526-f42ef1376166"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.627290 4719 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b364782-7bef-4e52-9526-f42ef1376166-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.631696 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b364782-7bef-4e52-9526-f42ef1376166-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b364782-7bef-4e52-9526-f42ef1376166" (UID: "4b364782-7bef-4e52-9526-f42ef1376166"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.631721 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b364782-7bef-4e52-9526-f42ef1376166-kube-api-access-lwsrg" (OuterVolumeSpecName: "kube-api-access-lwsrg") pod "4b364782-7bef-4e52-9526-f42ef1376166" (UID: "4b364782-7bef-4e52-9526-f42ef1376166"). InnerVolumeSpecName "kube-api-access-lwsrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.728691 4719 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b364782-7bef-4e52-9526-f42ef1376166-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 10:00:05 crc kubenswrapper[4719]: I1124 10:00:05.728906 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwsrg\" (UniqueName: \"kubernetes.io/projected/4b364782-7bef-4e52-9526-f42ef1376166-kube-api-access-lwsrg\") on node \"crc\" DevicePath \"\"" Nov 24 10:00:06 crc kubenswrapper[4719]: I1124 10:00:06.137797 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" event={"ID":"4b364782-7bef-4e52-9526-f42ef1376166","Type":"ContainerDied","Data":"7d27413bfdf8e17e7dd35e091932393654830275b97eeabb9aa561ba824d1f52"} Nov 24 10:00:06 crc kubenswrapper[4719]: I1124 10:00:06.138068 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d27413bfdf8e17e7dd35e091932393654830275b97eeabb9aa561ba824d1f52" Nov 24 10:00:06 crc kubenswrapper[4719]: I1124 10:00:06.137834 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399640-kdxmk" Nov 24 10:00:06 crc kubenswrapper[4719]: I1124 10:00:06.580830 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"] Nov 24 10:00:06 crc kubenswrapper[4719]: I1124 10:00:06.588363 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399595-k2g48"] Nov 24 10:00:08 crc kubenswrapper[4719]: I1124 10:00:08.521604 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 10:00:08 crc kubenswrapper[4719]: E1124 10:00:08.522584 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:00:08 crc kubenswrapper[4719]: I1124 10:00:08.541600 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5997f68-a992-410f-839f-80a8fac64cb1" path="/var/lib/kubelet/pods/f5997f68-a992-410f-839f-80a8fac64cb1/volumes" Nov 24 10:00:13 crc kubenswrapper[4719]: I1124 10:00:13.416521 4719 scope.go:117] "RemoveContainer" containerID="306cd7df2dd46e45722c7f6c6ddde4d023189804166a5f0d10db2ed3f923896d" Nov 24 10:00:13 crc kubenswrapper[4719]: I1124 10:00:13.446513 4719 scope.go:117] "RemoveContainer" containerID="1fa600130321fbab71ba96891333dbb1beffc7363f8f3684bc285a17baf6ed45" Nov 24 10:00:22 crc kubenswrapper[4719]: I1124 10:00:22.520914 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 10:00:22 crc kubenswrapper[4719]: E1124 10:00:22.522534 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:00:36 crc kubenswrapper[4719]: I1124 10:00:36.521676 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 10:00:37 crc kubenswrapper[4719]: I1124 10:00:37.437602 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"22a40b292aa2c73b1bc4ad790f908be2ce33655290a1fd793eac90657829c15d"} Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.604852 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pf6xd"] Nov 24 10:00:51 crc kubenswrapper[4719]: E1124 10:00:51.605850 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b364782-7bef-4e52-9526-f42ef1376166" containerName="collect-profiles" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.605871 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b364782-7bef-4e52-9526-f42ef1376166" containerName="collect-profiles" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.606343 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b364782-7bef-4e52-9526-f42ef1376166" containerName="collect-profiles" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.608115 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.622547 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pf6xd"] Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.664761 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfhj\" (UniqueName: \"kubernetes.io/projected/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-kube-api-access-bwfhj\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.665028 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-catalog-content\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.665104 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-utilities\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.767827 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwfhj\" (UniqueName: \"kubernetes.io/projected/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-kube-api-access-bwfhj\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 
10:00:51.767924 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-catalog-content\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.767949 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-utilities\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.768562 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-utilities\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.768717 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-catalog-content\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.789027 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwfhj\" (UniqueName: \"kubernetes.io/projected/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-kube-api-access-bwfhj\") pod \"redhat-operators-pf6xd\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:51 crc kubenswrapper[4719]: I1124 10:00:51.933823 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:00:52 crc kubenswrapper[4719]: I1124 10:00:52.558132 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pf6xd"] Nov 24 10:00:52 crc kubenswrapper[4719]: I1124 10:00:52.559563 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf6xd" event={"ID":"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb","Type":"ContainerStarted","Data":"0b21dff930881c44357c465f27a723c160f7a4326d150644efc05d8b2451cb5f"} Nov 24 10:00:53 crc kubenswrapper[4719]: I1124 10:00:53.569978 4719 generic.go:334] "Generic (PLEG): container finished" podID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerID="c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3" exitCode=0 Nov 24 10:00:53 crc kubenswrapper[4719]: I1124 10:00:53.570051 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf6xd" event={"ID":"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb","Type":"ContainerDied","Data":"c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3"} Nov 24 10:00:53 crc kubenswrapper[4719]: I1124 10:00:53.572518 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 10:00:54 crc kubenswrapper[4719]: I1124 10:00:54.582098 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf6xd" event={"ID":"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb","Type":"ContainerStarted","Data":"70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df"} Nov 24 10:00:59 crc kubenswrapper[4719]: I1124 10:00:59.633009 4719 generic.go:334] "Generic (PLEG): container finished" podID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerID="70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df" exitCode=0 Nov 24 10:00:59 crc kubenswrapper[4719]: I1124 10:00:59.633105 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf6xd" event={"ID":"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb","Type":"ContainerDied","Data":"70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df"} Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.164797 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29399641-rwbnf"] Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.168515 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.188408 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399641-rwbnf"] Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.349666 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cjld\" (UniqueName: \"kubernetes.io/projected/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-kube-api-access-6cjld\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.349740 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-fernet-keys\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.349779 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-combined-ca-bundle\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.349865 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-config-data\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.452364 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-combined-ca-bundle\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.452482 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-config-data\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.452574 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cjld\" (UniqueName: \"kubernetes.io/projected/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-kube-api-access-6cjld\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.452663 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-fernet-keys\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.461966 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-combined-ca-bundle\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.462055 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-fernet-keys\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.462116 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-config-data\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.477843 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cjld\" (UniqueName: \"kubernetes.io/projected/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-kube-api-access-6cjld\") pod \"keystone-cron-29399641-rwbnf\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.493600 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.660505 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf6xd" event={"ID":"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb","Type":"ContainerStarted","Data":"fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63"} Nov 24 10:01:00 crc kubenswrapper[4719]: I1124 10:01:00.723337 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pf6xd" podStartSLOduration=3.252253981 podStartE2EDuration="9.723315242s" podCreationTimestamp="2025-11-24 10:00:51 +0000 UTC" firstStartedPulling="2025-11-24 10:00:53.572231236 +0000 UTC m=+4029.903504508" lastFinishedPulling="2025-11-24 10:01:00.043292517 +0000 UTC m=+4036.374565769" observedRunningTime="2025-11-24 10:01:00.703690865 +0000 UTC m=+4037.034964107" watchObservedRunningTime="2025-11-24 10:01:00.723315242 +0000 UTC m=+4037.054588494" Nov 24 10:01:01 crc kubenswrapper[4719]: I1124 10:01:01.269862 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29399641-rwbnf"] Nov 24 10:01:01 crc kubenswrapper[4719]: I1124 10:01:01.670438 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399641-rwbnf" event={"ID":"6ff61f4c-fc69-4299-987e-1c9ca3e1c633","Type":"ContainerStarted","Data":"7bdb0eb7e40e504045fe8b5a7bf60cad024d118e7981b149f473bbe2a48d9603"} Nov 24 10:01:01 crc kubenswrapper[4719]: I1124 10:01:01.670709 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399641-rwbnf" event={"ID":"6ff61f4c-fc69-4299-987e-1c9ca3e1c633","Type":"ContainerStarted","Data":"6207e0ae01a573c024f90a4d0721d83efd27dc3b2575ef05d1296dacf32f299a"} Nov 24 10:01:01 crc kubenswrapper[4719]: I1124 10:01:01.691135 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29399641-rwbnf" podStartSLOduration=1.6911090309999999 
podStartE2EDuration="1.691109031s" podCreationTimestamp="2025-11-24 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 10:01:01.686448709 +0000 UTC m=+4038.017721971" watchObservedRunningTime="2025-11-24 10:01:01.691109031 +0000 UTC m=+4038.022382283" Nov 24 10:01:01 crc kubenswrapper[4719]: I1124 10:01:01.934711 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:01:01 crc kubenswrapper[4719]: I1124 10:01:01.934757 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:01:03 crc kubenswrapper[4719]: I1124 10:01:03.019524 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pf6xd" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="registry-server" probeResult="failure" output=< Nov 24 10:01:03 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 10:01:03 crc kubenswrapper[4719]: > Nov 24 10:01:05 crc kubenswrapper[4719]: I1124 10:01:05.711534 4719 generic.go:334] "Generic (PLEG): container finished" podID="6ff61f4c-fc69-4299-987e-1c9ca3e1c633" containerID="7bdb0eb7e40e504045fe8b5a7bf60cad024d118e7981b149f473bbe2a48d9603" exitCode=0 Nov 24 10:01:05 crc kubenswrapper[4719]: I1124 10:01:05.711641 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399641-rwbnf" event={"ID":"6ff61f4c-fc69-4299-987e-1c9ca3e1c633","Type":"ContainerDied","Data":"7bdb0eb7e40e504045fe8b5a7bf60cad024d118e7981b149f473bbe2a48d9603"} Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.206014 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.332099 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-combined-ca-bundle\") pod \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.332218 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cjld\" (UniqueName: \"kubernetes.io/projected/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-kube-api-access-6cjld\") pod \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.332398 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-fernet-keys\") pod \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.332450 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-config-data\") pod \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\" (UID: \"6ff61f4c-fc69-4299-987e-1c9ca3e1c633\") " Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.345205 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-kube-api-access-6cjld" (OuterVolumeSpecName: "kube-api-access-6cjld") pod "6ff61f4c-fc69-4299-987e-1c9ca3e1c633" (UID: "6ff61f4c-fc69-4299-987e-1c9ca3e1c633"). InnerVolumeSpecName "kube-api-access-6cjld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.346325 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6ff61f4c-fc69-4299-987e-1c9ca3e1c633" (UID: "6ff61f4c-fc69-4299-987e-1c9ca3e1c633"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.386272 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ff61f4c-fc69-4299-987e-1c9ca3e1c633" (UID: "6ff61f4c-fc69-4299-987e-1c9ca3e1c633"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.401760 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-config-data" (OuterVolumeSpecName: "config-data") pod "6ff61f4c-fc69-4299-987e-1c9ca3e1c633" (UID: "6ff61f4c-fc69-4299-987e-1c9ca3e1c633"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.435974 4719 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.436285 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cjld\" (UniqueName: \"kubernetes.io/projected/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-kube-api-access-6cjld\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.436297 4719 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.436306 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff61f4c-fc69-4299-987e-1c9ca3e1c633-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.738834 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29399641-rwbnf" event={"ID":"6ff61f4c-fc69-4299-987e-1c9ca3e1c633","Type":"ContainerDied","Data":"6207e0ae01a573c024f90a4d0721d83efd27dc3b2575ef05d1296dacf32f299a"} Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.738874 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6207e0ae01a573c024f90a4d0721d83efd27dc3b2575ef05d1296dacf32f299a" Nov 24 10:01:07 crc kubenswrapper[4719]: I1124 10:01:07.738967 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29399641-rwbnf" Nov 24 10:01:12 crc kubenswrapper[4719]: I1124 10:01:12.986674 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pf6xd" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="registry-server" probeResult="failure" output=< Nov 24 10:01:12 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 10:01:12 crc kubenswrapper[4719]: > Nov 24 10:01:22 crc kubenswrapper[4719]: I1124 10:01:22.977812 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pf6xd" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="registry-server" probeResult="failure" output=< Nov 24 10:01:22 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 10:01:22 crc kubenswrapper[4719]: > Nov 24 10:01:31 crc kubenswrapper[4719]: I1124 10:01:31.998380 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:01:32 crc kubenswrapper[4719]: I1124 10:01:32.046244 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:01:32 crc kubenswrapper[4719]: I1124 10:01:32.227598 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pf6xd"] Nov 24 10:01:34 crc kubenswrapper[4719]: I1124 10:01:34.008207 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pf6xd" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="registry-server" 
containerID="cri-o://fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63" gracePeriod=2 Nov 24 10:01:34 crc kubenswrapper[4719]: I1124 10:01:34.828686 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:01:34 crc kubenswrapper[4719]: I1124 10:01:34.944942 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwfhj\" (UniqueName: \"kubernetes.io/projected/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-kube-api-access-bwfhj\") pod \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " Nov 24 10:01:34 crc kubenswrapper[4719]: I1124 10:01:34.945356 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-utilities\") pod \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " Nov 24 10:01:34 crc kubenswrapper[4719]: I1124 10:01:34.945406 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-catalog-content\") pod \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\" (UID: \"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb\") " Nov 24 10:01:34 crc kubenswrapper[4719]: I1124 10:01:34.946176 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-utilities" (OuterVolumeSpecName: "utilities") pod "259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" (UID: "259d142a-2df8-44ce-99f8-1e5c9f8e7cfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:01:34 crc kubenswrapper[4719]: I1124 10:01:34.964766 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-kube-api-access-bwfhj" (OuterVolumeSpecName: "kube-api-access-bwfhj") pod "259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" (UID: "259d142a-2df8-44ce-99f8-1e5c9f8e7cfb"). InnerVolumeSpecName "kube-api-access-bwfhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.019264 4719 generic.go:334] "Generic (PLEG): container finished" podID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerID="fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63" exitCode=0 Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.019308 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf6xd" event={"ID":"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb","Type":"ContainerDied","Data":"fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63"} Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.019336 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pf6xd" event={"ID":"259d142a-2df8-44ce-99f8-1e5c9f8e7cfb","Type":"ContainerDied","Data":"0b21dff930881c44357c465f27a723c160f7a4326d150644efc05d8b2451cb5f"} Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.019358 4719 scope.go:117] "RemoveContainer" containerID="fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.019622 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pf6xd" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.046263 4719 scope.go:117] "RemoveContainer" containerID="70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.047486 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwfhj\" (UniqueName: \"kubernetes.io/projected/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-kube-api-access-bwfhj\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.047514 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.068699 4719 scope.go:117] "RemoveContainer" containerID="c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.083389 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" (UID: "259d142a-2df8-44ce-99f8-1e5c9f8e7cfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.108848 4719 scope.go:117] "RemoveContainer" containerID="fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63" Nov 24 10:01:35 crc kubenswrapper[4719]: E1124 10:01:35.109256 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63\": container with ID starting with fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63 not found: ID does not exist" containerID="fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.109348 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63"} err="failed to get container status \"fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63\": rpc error: code = NotFound desc = could not find container \"fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63\": container with ID starting with fdab7af16d14901f1826d578e643001b140345fad9f3f3eadd265fdb6eab1d63 not found: ID does not exist" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.109431 4719 scope.go:117] "RemoveContainer" containerID="70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df" Nov 24 10:01:35 crc kubenswrapper[4719]: E1124 10:01:35.109788 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df\": container with ID starting with 70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df not found: ID does not exist" containerID="70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.109865 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df"} err="failed to get 
container status \"70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df\": rpc error: code = NotFound desc = could not find container \"70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df\": container with ID starting with 70edf73ba2d30fa0790f42a55dea54b9933084c4a62bb3679017b27a95cbe7df not found: ID does not exist" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.109956 4719 scope.go:117] "RemoveContainer" containerID="c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3" Nov 24 10:01:35 crc kubenswrapper[4719]: E1124 10:01:35.110215 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3\": container with ID starting with c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3 not found: ID does not exist" containerID="c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.110287 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3"} err="failed to get container status \"c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3\": rpc error: code = NotFound desc = could not find container \"c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3\": container with ID starting with c9d1c69e4a13cc1b996059699ad9a45a1fd66e81b1665c7c6b26a53ed41cbfb3 not found: ID does not exist" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.149257 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.355378 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pf6xd"] Nov 24 10:01:35 crc kubenswrapper[4719]: I1124 10:01:35.379291 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pf6xd"] Nov 24 10:01:36 crc kubenswrapper[4719]: I1124 10:01:36.536973 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" path="/var/lib/kubelet/pods/259d142a-2df8-44ce-99f8-1e5c9f8e7cfb/volumes" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.720966 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jrqzv"] Nov 24 10:02:29 crc kubenswrapper[4719]: E1124 10:02:29.722073 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff61f4c-fc69-4299-987e-1c9ca3e1c633" containerName="keystone-cron" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.722093 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff61f4c-fc69-4299-987e-1c9ca3e1c633" containerName="keystone-cron" Nov 24 10:02:29 crc kubenswrapper[4719]: E1124 10:02:29.722118 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="registry-server" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.722130 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="registry-server" Nov 24 10:02:29 crc kubenswrapper[4719]: E1124 10:02:29.722147 4719 cpu_manager.go:410] "RemoveStaleState: removing container" 
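The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above are benign: the containers are already gone, so the CRI runtime answers gRPC NotFound and kubelet logs the error and moves on. A sketch of that idempotent-removal pattern, assuming the google.golang.org/grpc status and codes packages; ensureRemoved is a hypothetical helper, not kubelet code:

```go
// idempotent_remove.go: treat gRPC NotFound from the runtime as success,
// the way kubelet tolerates the "container ... not found: ID does not
// exist" errors above.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ensureRemoved maps "already deleted" onto success: deletion is idempotent.
func ensureRemoved(err error) error {
	if err == nil || status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	// Simulate the runtime answering NotFound for an already-deleted container.
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println("treat as removed:", ensureRemoved(err) == nil)
}
```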
podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="extract-content" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.722159 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="extract-content" Nov 24 10:02:29 crc kubenswrapper[4719]: E1124 10:02:29.722187 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="extract-utilities" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.722198 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="extract-utilities" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.722539 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="259d142a-2df8-44ce-99f8-1e5c9f8e7cfb" containerName="registry-server" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.722561 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff61f4c-fc69-4299-987e-1c9ca3e1c633" containerName="keystone-cron" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.724859 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.745774 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrqzv"] Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.828740 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96n9\" (UniqueName: \"kubernetes.io/projected/292ee1b6-6963-432b-a926-833b733c28dc-kube-api-access-g96n9\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.828849 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-catalog-content\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.828887 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-utilities\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.930834 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-utilities\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.931210 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g96n9\" (UniqueName: \"kubernetes.io/projected/292ee1b6-6963-432b-a926-833b733c28dc-kube-api-access-g96n9\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.931353 4719 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-catalog-content\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.931449 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-utilities\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:29 crc kubenswrapper[4719]: I1124 10:02:29.931831 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-catalog-content\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:30 crc kubenswrapper[4719]: I1124 10:02:30.012740 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g96n9\" (UniqueName: \"kubernetes.io/projected/292ee1b6-6963-432b-a926-833b733c28dc-kube-api-access-g96n9\") pod \"certified-operators-jrqzv\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:30 crc kubenswrapper[4719]: I1124 10:02:30.060007 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:30 crc kubenswrapper[4719]: I1124 10:02:30.581453 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrqzv"] Nov 24 10:02:31 crc kubenswrapper[4719]: I1124 10:02:31.545757 4719 generic.go:334] "Generic (PLEG): container finished" podID="292ee1b6-6963-432b-a926-833b733c28dc" containerID="6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f" exitCode=0 Nov 24 10:02:31 crc kubenswrapper[4719]: I1124 10:02:31.545799 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqzv" event={"ID":"292ee1b6-6963-432b-a926-833b733c28dc","Type":"ContainerDied","Data":"6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f"} Nov 24 10:02:31 crc kubenswrapper[4719]: I1124 10:02:31.545837 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqzv" event={"ID":"292ee1b6-6963-432b-a926-833b733c28dc","Type":"ContainerStarted","Data":"18c7c4e1ea255fa341800fbd204befab95f124ae76006a19c62ebf0530c01b5e"} Nov 24 10:02:32 crc kubenswrapper[4719]: I1124 10:02:32.567610 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqzv" event={"ID":"292ee1b6-6963-432b-a926-833b733c28dc","Type":"ContainerStarted","Data":"4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade"} Nov 24 10:02:34 crc kubenswrapper[4719]: I1124 10:02:34.597161 4719 generic.go:334] "Generic (PLEG): container finished" podID="292ee1b6-6963-432b-a926-833b733c28dc" containerID="4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade" exitCode=0 Nov 24 10:02:34 crc kubenswrapper[4719]: I1124 10:02:34.597248 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqzv" 
event={"ID":"292ee1b6-6963-432b-a926-833b733c28dc","Type":"ContainerDied","Data":"4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade"} Nov 24 10:02:35 crc kubenswrapper[4719]: I1124 10:02:35.606939 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqzv" event={"ID":"292ee1b6-6963-432b-a926-833b733c28dc","Type":"ContainerStarted","Data":"968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545"} Nov 24 10:02:35 crc kubenswrapper[4719]: I1124 10:02:35.631220 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jrqzv" podStartSLOduration=3.126479734 podStartE2EDuration="6.631201392s" podCreationTimestamp="2025-11-24 10:02:29 +0000 UTC" firstStartedPulling="2025-11-24 10:02:31.547610189 +0000 UTC m=+4127.878883441" lastFinishedPulling="2025-11-24 10:02:35.052331837 +0000 UTC m=+4131.383605099" observedRunningTime="2025-11-24 10:02:35.624566773 +0000 UTC m=+4131.955840035" watchObservedRunningTime="2025-11-24 10:02:35.631201392 +0000 UTC m=+4131.962474644" Nov 24 10:02:40 crc kubenswrapper[4719]: I1124 10:02:40.061150 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:40 crc kubenswrapper[4719]: I1124 10:02:40.061528 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:40 crc kubenswrapper[4719]: I1124 10:02:40.768378 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:40 crc kubenswrapper[4719]: I1124 10:02:40.833541 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:41 crc kubenswrapper[4719]: I1124 10:02:41.007826 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrqzv"] Nov 24 10:02:42 crc kubenswrapper[4719]: I1124 10:02:42.662785 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jrqzv" podUID="292ee1b6-6963-432b-a926-833b733c28dc" containerName="registry-server" containerID="cri-o://968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545" gracePeriod=2 Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.217861 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.295126 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-catalog-content\") pod \"292ee1b6-6963-432b-a926-833b733c28dc\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.295335 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g96n9\" (UniqueName: \"kubernetes.io/projected/292ee1b6-6963-432b-a926-833b733c28dc-kube-api-access-g96n9\") pod \"292ee1b6-6963-432b-a926-833b733c28dc\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.295381 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-utilities\") pod \"292ee1b6-6963-432b-a926-833b733c28dc\" (UID: \"292ee1b6-6963-432b-a926-833b733c28dc\") " Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.296356 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-utilities" (OuterVolumeSpecName: "utilities") pod "292ee1b6-6963-432b-a926-833b733c28dc" (UID: "292ee1b6-6963-432b-a926-833b733c28dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.300235 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292ee1b6-6963-432b-a926-833b733c28dc-kube-api-access-g96n9" (OuterVolumeSpecName: "kube-api-access-g96n9") pod "292ee1b6-6963-432b-a926-833b733c28dc" (UID: "292ee1b6-6963-432b-a926-833b733c28dc"). InnerVolumeSpecName "kube-api-access-g96n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.341145 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "292ee1b6-6963-432b-a926-833b733c28dc" (UID: "292ee1b6-6963-432b-a926-833b733c28dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.398222 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.398282 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292ee1b6-6963-432b-a926-833b733c28dc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.398304 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g96n9\" (UniqueName: \"kubernetes.io/projected/292ee1b6-6963-432b-a926-833b733c28dc-kube-api-access-g96n9\") on node \"crc\" DevicePath \"\"" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.673377 4719 generic.go:334] "Generic (PLEG): container finished" podID="292ee1b6-6963-432b-a926-833b733c28dc" containerID="968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545" exitCode=0 Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.673432 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrqzv" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.673423 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqzv" event={"ID":"292ee1b6-6963-432b-a926-833b733c28dc","Type":"ContainerDied","Data":"968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545"} Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.673560 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqzv" event={"ID":"292ee1b6-6963-432b-a926-833b733c28dc","Type":"ContainerDied","Data":"18c7c4e1ea255fa341800fbd204befab95f124ae76006a19c62ebf0530c01b5e"} Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.673578 4719 scope.go:117] "RemoveContainer" containerID="968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.698659 4719 scope.go:117] "RemoveContainer" containerID="4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.709015 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrqzv"] Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.718251 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jrqzv"] Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.733440 4719 scope.go:117] "RemoveContainer" containerID="6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.776531 4719 scope.go:117] "RemoveContainer" containerID="968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545" Nov 24 10:02:43 crc kubenswrapper[4719]: E1124 10:02:43.777437 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545\": container with ID starting with 968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545 not found: ID does not exist" containerID="968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545" Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.777599 
4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545"} err="failed to get container status \"968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545\": rpc error: code = NotFound desc = could not find container \"968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545\": container with ID starting with 968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545 not found: ID does not exist"
Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.777712 4719 scope.go:117] "RemoveContainer" containerID="4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade"
Nov 24 10:02:43 crc kubenswrapper[4719]: E1124 10:02:43.778249 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade\": container with ID starting with 4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade not found: ID does not exist" containerID="4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade"
Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.778303 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade"} err="failed to get container status \"4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade\": rpc error: code = NotFound desc = could not find container \"4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade\": container with ID starting with 4d39d4aa958018da36a5d34282d3e60f178752dfeaa735624410ea54d6d50ade not found: ID does not exist"
Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.778338 4719 scope.go:117] "RemoveContainer" containerID="6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f"
Nov 24 10:02:43 crc kubenswrapper[4719]: E1124 10:02:43.778968 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f\": container with ID starting with 6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f not found: ID does not exist" containerID="6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f"
Nov 24 10:02:43 crc kubenswrapper[4719]: I1124 10:02:43.779097 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f"} err="failed to get container status \"6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f\": rpc error: code = NotFound desc = could not find container \"6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f\": container with ID starting with 6528bca8111bf3a20ca53111ea9f543f4aab0b580b537f58e608aa018321024f not found: ID does not exist"
Nov 24 10:02:44 crc kubenswrapper[4719]: I1124 10:02:44.531602 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292ee1b6-6963-432b-a926-833b733c28dc" path="/var/lib/kubelet/pods/292ee1b6-6963-432b-a926-833b733c28dc/volumes"
Nov 24 10:03:04 crc kubenswrapper[4719]: I1124 10:03:04.561670 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 10:03:04 crc kubenswrapper[4719]: I1124 10:03:04.562199 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
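
The RemoveContainer / "ContainerStatus from runtime service failed" pairs at 10:02:43 above are a benign race: by the time the kubelet re-queries the runtime, the container it just removed is already gone, so the gRPC NotFound answer means the deletion has effectively already happened. A sketch of that classification (removeContainer is a hypothetical stand-in for the CRI call, not the kubelet's actual code):

```go
// Sketch: treat NotFound from the runtime as success when removing a
// container that is already gone, making the removal idempotent.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer simulates the runtime's answer for a container that no
// longer exists, matching the "rpc error: code = NotFound" text above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeIfPresent swallows NotFound: the container is already deleted.
func removeIfPresent(id string) error {
	if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	err := removeIfPresent("968cca123e1997e6510addf8875fd3494fcf5e4c9c638c18f7d4d914d5159545")
	fmt.Println("removal error:", err) // removal error: <nil>
}
```
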
Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.586309 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xrj7m"]
Nov 24 10:03:15 crc kubenswrapper[4719]: E1124 10:03:15.598429 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292ee1b6-6963-432b-a926-833b733c28dc" containerName="extract-content"
Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.600690 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="292ee1b6-6963-432b-a926-833b733c28dc" containerName="extract-content"
Nov 24 10:03:15 crc kubenswrapper[4719]: E1124 10:03:15.600741 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292ee1b6-6963-432b-a926-833b733c28dc" containerName="registry-server"
Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.600748 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="292ee1b6-6963-432b-a926-833b733c28dc" containerName="registry-server"
Nov 24 10:03:15 crc kubenswrapper[4719]: E1124 10:03:15.600791 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292ee1b6-6963-432b-a926-833b733c28dc" containerName="extract-utilities"
Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.600800 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="292ee1b6-6963-432b-a926-833b733c28dc" containerName="extract-utilities"
Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.601313 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="292ee1b6-6963-432b-a926-833b733c28dc" containerName="registry-server"
Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.615084 4719 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.628740 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrj7m"] Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.792363 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-utilities\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.792443 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-catalog-content\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.792506 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wgrm\" (UniqueName: \"kubernetes.io/projected/0ec212e4-1383-4039-bd1c-21a718c9adfe-kube-api-access-7wgrm\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.894228 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-utilities\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.894286 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-catalog-content\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.894339 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wgrm\" (UniqueName: \"kubernetes.io/projected/0ec212e4-1383-4039-bd1c-21a718c9adfe-kube-api-access-7wgrm\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.894786 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-catalog-content\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.895056 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-utilities\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.914839 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7wgrm\" (UniqueName: \"kubernetes.io/projected/0ec212e4-1383-4039-bd1c-21a718c9adfe-kube-api-access-7wgrm\") pod \"community-operators-xrj7m\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:15 crc kubenswrapper[4719]: I1124 10:03:15.944160 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:16 crc kubenswrapper[4719]: I1124 10:03:16.552738 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrj7m"] Nov 24 10:03:17 crc kubenswrapper[4719]: I1124 10:03:17.000779 4719 generic.go:334] "Generic (PLEG): container finished" podID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerID="64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1" exitCode=0 Nov 24 10:03:17 crc kubenswrapper[4719]: I1124 10:03:17.000828 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrj7m" event={"ID":"0ec212e4-1383-4039-bd1c-21a718c9adfe","Type":"ContainerDied","Data":"64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1"} Nov 24 10:03:17 crc kubenswrapper[4719]: I1124 10:03:17.001058 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrj7m" event={"ID":"0ec212e4-1383-4039-bd1c-21a718c9adfe","Type":"ContainerStarted","Data":"de1f2a38dc74bb987209602421406476d3f4950faa4839d10848bbe85ee4e990"} Nov 24 10:03:18 crc kubenswrapper[4719]: I1124 10:03:18.011626 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrj7m" event={"ID":"0ec212e4-1383-4039-bd1c-21a718c9adfe","Type":"ContainerStarted","Data":"5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275"} Nov 24 10:03:20 crc kubenswrapper[4719]: I1124 10:03:20.035947 4719 generic.go:334] "Generic (PLEG): container finished" podID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerID="5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275" exitCode=0 Nov 24 10:03:20 crc kubenswrapper[4719]: I1124 10:03:20.036077 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrj7m" event={"ID":"0ec212e4-1383-4039-bd1c-21a718c9adfe","Type":"ContainerDied","Data":"5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275"} Nov 24 10:03:21 crc kubenswrapper[4719]: I1124 10:03:21.048253 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrj7m" event={"ID":"0ec212e4-1383-4039-bd1c-21a718c9adfe","Type":"ContainerStarted","Data":"ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0"} Nov 24 10:03:21 crc kubenswrapper[4719]: I1124 10:03:21.069256 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xrj7m" podStartSLOduration=2.435262848 podStartE2EDuration="6.069242034s" podCreationTimestamp="2025-11-24 10:03:15 +0000 UTC" firstStartedPulling="2025-11-24 10:03:17.002443487 +0000 UTC m=+4173.333716739" lastFinishedPulling="2025-11-24 10:03:20.636422663 +0000 UTC m=+4176.967695925" observedRunningTime="2025-11-24 10:03:21.065549189 +0000 UTC m=+4177.396822441" watchObservedRunningTime="2025-11-24 10:03:21.069242034 +0000 UTC m=+4177.400515286" Nov 24 10:03:25 crc kubenswrapper[4719]: I1124 10:03:25.945012 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:25 crc kubenswrapper[4719]: I1124 10:03:25.945521 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:26 crc kubenswrapper[4719]: I1124 10:03:26.000340 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:26 crc kubenswrapper[4719]: I1124 10:03:26.136551 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:26 crc kubenswrapper[4719]: I1124 10:03:26.239407 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrj7m"] Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.101589 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xrj7m" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerName="registry-server" containerID="cri-o://ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0" gracePeriod=2 Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.622340 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.654473 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-utilities\") pod \"0ec212e4-1383-4039-bd1c-21a718c9adfe\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.654573 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-catalog-content\") pod \"0ec212e4-1383-4039-bd1c-21a718c9adfe\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.654710 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wgrm\" (UniqueName: \"kubernetes.io/projected/0ec212e4-1383-4039-bd1c-21a718c9adfe-kube-api-access-7wgrm\") pod \"0ec212e4-1383-4039-bd1c-21a718c9adfe\" (UID: \"0ec212e4-1383-4039-bd1c-21a718c9adfe\") " Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.655725 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-utilities" (OuterVolumeSpecName: "utilities") pod "0ec212e4-1383-4039-bd1c-21a718c9adfe" (UID: "0ec212e4-1383-4039-bd1c-21a718c9adfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.661548 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec212e4-1383-4039-bd1c-21a718c9adfe-kube-api-access-7wgrm" (OuterVolumeSpecName: "kube-api-access-7wgrm") pod "0ec212e4-1383-4039-bd1c-21a718c9adfe" (UID: "0ec212e4-1383-4039-bd1c-21a718c9adfe"). InnerVolumeSpecName "kube-api-access-7wgrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.705185 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ec212e4-1383-4039-bd1c-21a718c9adfe" (UID: "0ec212e4-1383-4039-bd1c-21a718c9adfe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.756510 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wgrm\" (UniqueName: \"kubernetes.io/projected/0ec212e4-1383-4039-bd1c-21a718c9adfe-kube-api-access-7wgrm\") on node \"crc\" DevicePath \"\"" Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.756551 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:03:28 crc kubenswrapper[4719]: I1124 10:03:28.756561 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec212e4-1383-4039-bd1c-21a718c9adfe-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.109577 4719 generic.go:334] "Generic (PLEG): container finished" podID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerID="ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0" exitCode=0 Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.109639 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrj7m" event={"ID":"0ec212e4-1383-4039-bd1c-21a718c9adfe","Type":"ContainerDied","Data":"ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0"} Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.109664 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrj7m" event={"ID":"0ec212e4-1383-4039-bd1c-21a718c9adfe","Type":"ContainerDied","Data":"de1f2a38dc74bb987209602421406476d3f4950faa4839d10848bbe85ee4e990"} Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.109682 4719 scope.go:117] "RemoveContainer" containerID="ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.109799 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xrj7m" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.140348 4719 scope.go:117] "RemoveContainer" containerID="5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.163127 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrj7m"] Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.195119 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xrj7m"] Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.198566 4719 scope.go:117] "RemoveContainer" containerID="64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.222914 4719 scope.go:117] "RemoveContainer" containerID="ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0" Nov 24 10:03:29 crc kubenswrapper[4719]: E1124 10:03:29.223466 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0\": container with ID starting with ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0 not found: ID does not exist" containerID="ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.223500 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0"} err="failed to get container status \"ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0\": rpc error: code = NotFound desc = could not find container \"ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0\": container with ID starting with ec0f2eac57f021f231d3524542beef5707512ae6f22c6c1bd2afeee59ac8f3b0 not found: ID does not exist" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.223520 4719 scope.go:117] "RemoveContainer" containerID="5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275" Nov 24 10:03:29 crc kubenswrapper[4719]: E1124 10:03:29.223885 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275\": container with ID starting with 5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275 not found: ID does not exist" containerID="5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.223909 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275"} err="failed to get container status \"5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275\": rpc error: code = NotFound desc = could not find container \"5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275\": container with ID starting with 5c84582f0030e6317d99f1a9dc5273e690381fad83ba4f0cd08e494dd14c5275 not found: ID does not exist" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.223927 4719 scope.go:117] "RemoveContainer" containerID="64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1" Nov 24 10:03:29 crc kubenswrapper[4719]: E1124 10:03:29.224255 4719 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1\": container with ID starting with 64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1 not found: ID does not exist" containerID="64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1" Nov 24 10:03:29 crc kubenswrapper[4719]: I1124 10:03:29.224298 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1"} err="failed to get container status \"64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1\": rpc error: code = NotFound desc = could not find container \"64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1\": container with ID starting with 64a9822667b6b6dc8e1b4c95dbce2da9babd601f6f9274758f07c3e26c4533f1 not found: ID does not exist" Nov 24 10:03:30 crc kubenswrapper[4719]: I1124 10:03:30.539667 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" path="/var/lib/kubelet/pods/0ec212e4-1383-4039-bd1c-21a718c9adfe/volumes" Nov 24 10:03:34 crc kubenswrapper[4719]: I1124 10:03:34.562117 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:03:34 crc kubenswrapper[4719]: I1124 10:03:34.562728 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:04:04 crc kubenswrapper[4719]: I1124 10:04:04.562062 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:04:04 crc kubenswrapper[4719]: I1124 10:04:04.562596 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:04:04 crc kubenswrapper[4719]: I1124 10:04:04.562645 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 10:04:04 crc kubenswrapper[4719]: I1124 10:04:04.563496 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22a40b292aa2c73b1bc4ad790f908be2ce33655290a1fd793eac90657829c15d"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 10:04:04 crc kubenswrapper[4719]: I1124 10:04:04.563568 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" 
podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://22a40b292aa2c73b1bc4ad790f908be2ce33655290a1fd793eac90657829c15d" gracePeriod=600 Nov 24 10:04:05 crc kubenswrapper[4719]: I1124 10:04:05.446813 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="22a40b292aa2c73b1bc4ad790f908be2ce33655290a1fd793eac90657829c15d" exitCode=0 Nov 24 10:04:05 crc kubenswrapper[4719]: I1124 10:04:05.447071 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"22a40b292aa2c73b1bc4ad790f908be2ce33655290a1fd793eac90657829c15d"} Nov 24 10:04:05 crc kubenswrapper[4719]: I1124 10:04:05.447217 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de"} Nov 24 10:04:05 crc kubenswrapper[4719]: I1124 10:04:05.447238 4719 scope.go:117] "RemoveContainer" containerID="7338e04e481cc22550841996df9b69e9ef5bc356026c077795681bbe56459ac0" Nov 24 10:04:45 crc kubenswrapper[4719]: I1124 10:04:45.850538 4719 generic.go:334] "Generic (PLEG): container finished" podID="9c489706-83cc-4c99-9146-178f1efd5551" containerID="77094dfdaf95315159cf86b077a4317e374e8a1af358532c60d704baeb0ca825" exitCode=0 Nov 24 10:04:45 crc kubenswrapper[4719]: I1124 10:04:45.850621 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9c489706-83cc-4c99-9146-178f1efd5551","Type":"ContainerDied","Data":"77094dfdaf95315159cf86b077a4317e374e8a1af358532c60d704baeb0ca825"} Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.188791 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.387791 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-temporary\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.387981 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-workdir\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.388244 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ssh-key\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.388329 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-php2d\" (UniqueName: \"kubernetes.io/projected/9c489706-83cc-4c99-9146-178f1efd5551-kube-api-access-php2d\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.388419 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ca-certs\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.388471 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.389283 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.394978 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c489706-83cc-4c99-9146-178f1efd5551-kube-api-access-php2d" (OuterVolumeSpecName: "kube-api-access-php2d") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "kube-api-access-php2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.395326 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.398227 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.388575 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.398545 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config-secret\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.398596 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-config-data\") pod \"9c489706-83cc-4c99-9146-178f1efd5551\" (UID: \"9c489706-83cc-4c99-9146-178f1efd5551\") " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.399432 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-config-data" (OuterVolumeSpecName: "config-data") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.399678 4719 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.399699 4719 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.399710 4719 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9c489706-83cc-4c99-9146-178f1efd5551-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.399721 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-php2d\" (UniqueName: \"kubernetes.io/projected/9c489706-83cc-4c99-9146-178f1efd5551-kube-api-access-php2d\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.400175 4719 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.427566 4719 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.436642 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.439616 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.446305 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.448299 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9c489706-83cc-4c99-9146-178f1efd5551" (UID: "9c489706-83cc-4c99-9146-178f1efd5551"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.501700 4719 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.501986 4719 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.502085 4719 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.502144 4719 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.502225 4719 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c489706-83cc-4c99-9146-178f1efd5551-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.884008 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9c489706-83cc-4c99-9146-178f1efd5551","Type":"ContainerDied","Data":"885e7b9d6966b00e20e1f74140617a6099bb48893f70a5a6491a4e15f3a3a4e8"} Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.884464 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885e7b9d6966b00e20e1f74140617a6099bb48893f70a5a6491a4e15f3a3a4e8" Nov 24 10:04:47 crc kubenswrapper[4719]: I1124 10:04:47.884309 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.830261 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 10:04:57 crc kubenswrapper[4719]: E1124 10:04:57.831068 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerName="extract-content" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.831081 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerName="extract-content" Nov 24 10:04:57 crc kubenswrapper[4719]: E1124 10:04:57.831108 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerName="extract-utilities" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.831115 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerName="extract-utilities" Nov 24 10:04:57 crc kubenswrapper[4719]: E1124 10:04:57.831132 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c489706-83cc-4c99-9146-178f1efd5551" containerName="tempest-tests-tempest-tests-runner" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.831139 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c489706-83cc-4c99-9146-178f1efd5551" containerName="tempest-tests-tempest-tests-runner" Nov 24 10:04:57 crc kubenswrapper[4719]: E1124 10:04:57.831157 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerName="registry-server" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.831162 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerName="registry-server" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.831327 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec212e4-1383-4039-bd1c-21a718c9adfe" containerName="registry-server" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.831345 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c489706-83cc-4c99-9146-178f1efd5551" containerName="tempest-tests-tempest-tests-runner" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.831988 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.834662 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mmq4t" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.843769 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.995124 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2ct\" (UniqueName: \"kubernetes.io/projected/373e0d8e-a16a-4daa-8b4c-895994f91783-kube-api-access-lw2ct\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"373e0d8e-a16a-4daa-8b4c-895994f91783\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 10:04:57 crc kubenswrapper[4719]: I1124 10:04:57.995487 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"373e0d8e-a16a-4daa-8b4c-895994f91783\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 10:04:58 crc kubenswrapper[4719]: I1124 10:04:58.098125 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"373e0d8e-a16a-4daa-8b4c-895994f91783\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 10:04:58 crc kubenswrapper[4719]: I1124 10:04:58.098718 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2ct\" (UniqueName: \"kubernetes.io/projected/373e0d8e-a16a-4daa-8b4c-895994f91783-kube-api-access-lw2ct\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"373e0d8e-a16a-4daa-8b4c-895994f91783\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 10:04:58 crc kubenswrapper[4719]: I1124 10:04:58.100184 4719 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"373e0d8e-a16a-4daa-8b4c-895994f91783\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 10:04:58 crc kubenswrapper[4719]: I1124 10:04:58.116984 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw2ct\" (UniqueName: \"kubernetes.io/projected/373e0d8e-a16a-4daa-8b4c-895994f91783-kube-api-access-lw2ct\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"373e0d8e-a16a-4daa-8b4c-895994f91783\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 10:04:58 crc kubenswrapper[4719]: I1124 10:04:58.134089 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"373e0d8e-a16a-4daa-8b4c-895994f91783\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 10:04:58 crc 
kubenswrapper[4719]: I1124 10:04:58.168668 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Nov 24 10:04:58 crc kubenswrapper[4719]: I1124 10:04:58.597346 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Nov 24 10:04:58 crc kubenswrapper[4719]: I1124 10:04:58.980064 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"373e0d8e-a16a-4daa-8b4c-895994f91783","Type":"ContainerStarted","Data":"a5da29b182957d2544da0ad393d071b2741fe6e550eba0a2863cda07b32d7d62"}
Nov 24 10:04:59 crc kubenswrapper[4719]: I1124 10:04:59.992400 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"373e0d8e-a16a-4daa-8b4c-895994f91783","Type":"ContainerStarted","Data":"e88547a64de8d79450ddb0eec3f026a0ef943f67ece2e86b6ca5c51a4eaff409"}
Nov 24 10:05:00 crc kubenswrapper[4719]: I1124 10:05:00.010137 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.951692479 podStartE2EDuration="3.010117199s" podCreationTimestamp="2025-11-24 10:04:57 +0000 UTC" firstStartedPulling="2025-11-24 10:04:58.599203598 +0000 UTC m=+4274.930476870" lastFinishedPulling="2025-11-24 10:04:59.657628298 +0000 UTC m=+4275.988901590" observedRunningTime="2025-11-24 10:05:00.002949396 +0000 UTC m=+4276.334222648" watchObservedRunningTime="2025-11-24 10:05:00.010117199 +0000 UTC m=+4276.341390461"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.180738 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7ct94/must-gather-bvptt"]
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.182631 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/must-gather-bvptt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.186443 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7ct94"/"openshift-service-ca.crt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.186601 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7ct94"/"default-dockercfg-8kpr8"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.186630 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7ct94"/"kube-root-ca.crt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.209149 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7ct94/must-gather-bvptt"]
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.309476 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e334aa29-ee2e-42cf-802e-44b527bd837a-must-gather-output\") pod \"must-gather-bvptt\" (UID: \"e334aa29-ee2e-42cf-802e-44b527bd837a\") " pod="openshift-must-gather-7ct94/must-gather-bvptt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.309812 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qchg\" (UniqueName: \"kubernetes.io/projected/e334aa29-ee2e-42cf-802e-44b527bd837a-kube-api-access-6qchg\") pod \"must-gather-bvptt\" (UID: \"e334aa29-ee2e-42cf-802e-44b527bd837a\") " pod="openshift-must-gather-7ct94/must-gather-bvptt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.411966 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e334aa29-ee2e-42cf-802e-44b527bd837a-must-gather-output\") pod \"must-gather-bvptt\" (UID: \"e334aa29-ee2e-42cf-802e-44b527bd837a\") " pod="openshift-must-gather-7ct94/must-gather-bvptt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.412064 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qchg\" (UniqueName: \"kubernetes.io/projected/e334aa29-ee2e-42cf-802e-44b527bd837a-kube-api-access-6qchg\") pod \"must-gather-bvptt\" (UID: \"e334aa29-ee2e-42cf-802e-44b527bd837a\") " pod="openshift-must-gather-7ct94/must-gather-bvptt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.412461 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e334aa29-ee2e-42cf-802e-44b527bd837a-must-gather-output\") pod \"must-gather-bvptt\" (UID: \"e334aa29-ee2e-42cf-802e-44b527bd837a\") " pod="openshift-must-gather-7ct94/must-gather-bvptt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.430461 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qchg\" (UniqueName: \"kubernetes.io/projected/e334aa29-ee2e-42cf-802e-44b527bd837a-kube-api-access-6qchg\") pod \"must-gather-bvptt\" (UID: \"e334aa29-ee2e-42cf-802e-44b527bd837a\") " pod="openshift-must-gather-7ct94/must-gather-bvptt"
Nov 24 10:05:25 crc kubenswrapper[4719]: I1124 10:05:25.503810 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/must-gather-bvptt"
Nov 24 10:05:26 crc kubenswrapper[4719]: I1124 10:05:26.280006 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7ct94/must-gather-bvptt"]
Nov 24 10:05:27 crc kubenswrapper[4719]: I1124 10:05:27.232242 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/must-gather-bvptt" event={"ID":"e334aa29-ee2e-42cf-802e-44b527bd837a","Type":"ContainerStarted","Data":"8ab617960640b45286685b86fd77ff0cd2abfc3d5acf0f5b737470ea25f77c45"}
Nov 24 10:05:36 crc kubenswrapper[4719]: I1124 10:05:36.342102 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/must-gather-bvptt" event={"ID":"e334aa29-ee2e-42cf-802e-44b527bd837a","Type":"ContainerStarted","Data":"7775123b7c97752de9901afc5aba1ff5386bbcbf0b6affcd9b5f6605187257a9"}
Nov 24 10:05:36 crc kubenswrapper[4719]: I1124 10:05:36.342532 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/must-gather-bvptt" event={"ID":"e334aa29-ee2e-42cf-802e-44b527bd837a","Type":"ContainerStarted","Data":"536c12f1454b919afbaa74d738ede510cd50daec76f039329621722c68ca62bc"}
Nov 24 10:05:36 crc kubenswrapper[4719]: I1124 10:05:36.372131 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7ct94/must-gather-bvptt" podStartSLOduration=2.141544412 podStartE2EDuration="11.372110759s" podCreationTimestamp="2025-11-24 10:05:25 +0000 UTC" firstStartedPulling="2025-11-24 10:05:26.281760278 +0000 UTC m=+4302.613033530" lastFinishedPulling="2025-11-24 10:05:35.512326625 +0000 UTC m=+4311.843599877" observedRunningTime="2025-11-24 10:05:36.357769572 +0000 UTC m=+4312.689042834" watchObservedRunningTime="2025-11-24 10:05:36.372110759 +0000 UTC m=+4312.703384021"
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.630895 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7ct94/crc-debug-7f286"]
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.632793 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.763849 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvr5\" (UniqueName: \"kubernetes.io/projected/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-kube-api-access-xsvr5\") pod \"crc-debug-7f286\" (UID: \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\") " pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.764164 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-host\") pod \"crc-debug-7f286\" (UID: \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\") " pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.866351 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsvr5\" (UniqueName: \"kubernetes.io/projected/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-kube-api-access-xsvr5\") pod \"crc-debug-7f286\" (UID: \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\") " pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.866474 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-host\") pod \"crc-debug-7f286\" (UID: \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\") " pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.866623 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-host\") pod \"crc-debug-7f286\" (UID: \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\") " pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.892849 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsvr5\" (UniqueName: \"kubernetes.io/projected/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-kube-api-access-xsvr5\") pod \"crc-debug-7f286\" (UID: \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\") " pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:05:41 crc kubenswrapper[4719]: I1124 10:05:41.958228 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:05:42 crc kubenswrapper[4719]: I1124 10:05:42.393304 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/crc-debug-7f286" event={"ID":"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68","Type":"ContainerStarted","Data":"eeb973715867869fdd1efdf68eac7cf2fad9c8cc95b268a3e405ad2544c599d3"}
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.065615 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-24vr4"]
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.073533 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.096013 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24vr4"]
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.154313 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrbjs\" (UniqueName: \"kubernetes.io/projected/5c7e15a0-c42f-4f39-bade-c17346e5eb70-kube-api-access-zrbjs\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.154506 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-catalog-content\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.154579 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-utilities\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.261567 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbjs\" (UniqueName: \"kubernetes.io/projected/5c7e15a0-c42f-4f39-bade-c17346e5eb70-kube-api-access-zrbjs\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.261659 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-catalog-content\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.261702 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-utilities\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.262176 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-utilities\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.262470 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-catalog-content\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.284389 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbjs\" (UniqueName: \"kubernetes.io/projected/5c7e15a0-c42f-4f39-bade-c17346e5eb70-kube-api-access-zrbjs\") pod \"redhat-marketplace-24vr4\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") " pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.392960 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:05:45 crc kubenswrapper[4719]: I1124 10:05:45.968079 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24vr4"]
Nov 24 10:05:45 crc kubenswrapper[4719]: W1124 10:05:45.986514 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c7e15a0_c42f_4f39_bade_c17346e5eb70.slice/crio-556e9822329aad72e58118d949840bfe0440c04639f7a015b20f655fd197d356 WatchSource:0}: Error finding container 556e9822329aad72e58118d949840bfe0440c04639f7a015b20f655fd197d356: Status 404 returned error can't find the container with id 556e9822329aad72e58118d949840bfe0440c04639f7a015b20f655fd197d356
Nov 24 10:05:46 crc kubenswrapper[4719]: I1124 10:05:46.436407 4719 generic.go:334] "Generic (PLEG): container finished" podID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerID="45ad1a6c12f25d45f9d21ebf9e1f139bf5b7f6d6d0ed0dac2baa2df44f5d0ff7" exitCode=0
Nov 24 10:05:46 crc kubenswrapper[4719]: I1124 10:05:46.436611 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24vr4" event={"ID":"5c7e15a0-c42f-4f39-bade-c17346e5eb70","Type":"ContainerDied","Data":"45ad1a6c12f25d45f9d21ebf9e1f139bf5b7f6d6d0ed0dac2baa2df44f5d0ff7"}
Nov 24 10:05:46 crc kubenswrapper[4719]: I1124 10:05:46.437787 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24vr4" event={"ID":"5c7e15a0-c42f-4f39-bade-c17346e5eb70","Type":"ContainerStarted","Data":"556e9822329aad72e58118d949840bfe0440c04639f7a015b20f655fd197d356"}
Nov 24 10:05:47 crc kubenswrapper[4719]: I1124 10:05:47.463183 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24vr4" event={"ID":"5c7e15a0-c42f-4f39-bade-c17346e5eb70","Type":"ContainerStarted","Data":"e4e25c70e5a78c860638732b710a88b05c2fab3979af8f7268524b820b5a3440"}
Nov 24 10:05:48 crc kubenswrapper[4719]: I1124 10:05:48.475435 4719 generic.go:334] "Generic (PLEG): container finished" podID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerID="e4e25c70e5a78c860638732b710a88b05c2fab3979af8f7268524b820b5a3440" exitCode=0
Nov 24 10:05:48 crc kubenswrapper[4719]: I1124 10:05:48.475479 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24vr4" event={"ID":"5c7e15a0-c42f-4f39-bade-c17346e5eb70","Type":"ContainerDied","Data":"e4e25c70e5a78c860638732b710a88b05c2fab3979af8f7268524b820b5a3440"}
Nov 24 10:05:54 crc kubenswrapper[4719]: I1124 10:05:54.934940 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 10:05:55 crc kubenswrapper[4719]: I1124 10:05:55.561477 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/crc-debug-7f286" event={"ID":"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68","Type":"ContainerStarted","Data":"409600c6505e0921c03d23d81dfff32a3c100d2b6909989280fa4524868c59d0"}
Nov 24 10:05:55 crc kubenswrapper[4719]: I1124 10:05:55.575709 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7ct94/crc-debug-7f286" podStartSLOduration=1.54812103 podStartE2EDuration="14.575693958s" podCreationTimestamp="2025-11-24 10:05:41 +0000 UTC" firstStartedPulling="2025-11-24 10:05:41.99334467 +0000 UTC m=+4318.324617912" lastFinishedPulling="2025-11-24 10:05:55.020917588 +0000 UTC m=+4331.352190840" observedRunningTime="2025-11-24 10:05:55.574637908 +0000 UTC m=+4331.905911150" watchObservedRunningTime="2025-11-24 10:05:55.575693958 +0000 UTC m=+4331.906967220"
Nov 24 10:05:56 crc kubenswrapper[4719]: I1124 10:05:56.583157 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24vr4" event={"ID":"5c7e15a0-c42f-4f39-bade-c17346e5eb70","Type":"ContainerStarted","Data":"995870f2ba87768d912034499ef5eff3526053ff9023663c41b71e66bea87e6d"}
Nov 24 10:05:56 crc kubenswrapper[4719]: I1124 10:05:56.618327 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-24vr4" podStartSLOduration=2.622943946 podStartE2EDuration="11.61828885s" podCreationTimestamp="2025-11-24 10:05:45 +0000 UTC" firstStartedPulling="2025-11-24 10:05:46.438748847 +0000 UTC m=+4322.770022089" lastFinishedPulling="2025-11-24 10:05:55.434093731 +0000 UTC m=+4331.765366993" observedRunningTime="2025-11-24 10:05:56.609527341 +0000 UTC m=+4332.940800593" watchObservedRunningTime="2025-11-24 10:05:56.61828885 +0000 UTC m=+4332.949562102"
Nov 24 10:06:02 crc kubenswrapper[4719]: I1124 10:06:02.357429 4719 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.211323979s: [/var/lib/containers/storage/overlay/6c103576f2b20fd303079e9c6aaeb42d50cafb5a678fbe578e93a40148b6c97c/diff /var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-r2r85_a0a59a11-1bf3-4ff8-8496-9414bc0ae549/manager/0.log]; will not log again for this container unless duration exceeds 2s
Nov 24 10:06:02 crc kubenswrapper[4719]: I1124 10:06:02.356523 4719 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.332390144s: [/var/lib/containers/storage/overlay/c359af02e4299e0a1779c9dc4492f4c52a4b4b80f40887c5769f252c07accdcf/diff /var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-xkfjt_5a2058d2-1589-484e-a5a1-de7e31af1a63/manager/0.log]; will not log again for this container unless duration exceeds 2s
Nov 24 10:06:02 crc kubenswrapper[4719]: I1124 10:06:02.361646 4719 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.32627678s: [/var/lib/containers/storage/overlay/b0ad7fce15f9a9250e67a397cc3ac690c9dd1046c1cf92f681675dfe7e33c129/diff /var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-lthw6_30241c11-005e-4410-ad1a-71d6c5c0910f/manager/0.log]; will not log again for this container unless duration exceeds 2s
Nov 24 10:06:05 crc kubenswrapper[4719]: I1124 10:06:05.393194 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:06:05 crc kubenswrapper[4719]: I1124 10:06:05.393759 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:06:05 crc kubenswrapper[4719]: I1124 10:06:05.451098 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:06:05 crc kubenswrapper[4719]: I1124 10:06:05.719266 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:06:05 crc kubenswrapper[4719]: I1124 10:06:05.785746 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24vr4"]
Nov 24 10:06:07 crc kubenswrapper[4719]: I1124 10:06:07.678546 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-24vr4" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerName="registry-server" containerID="cri-o://995870f2ba87768d912034499ef5eff3526053ff9023663c41b71e66bea87e6d" gracePeriod=2
Nov 24 10:06:08 crc kubenswrapper[4719]: I1124 10:06:08.690547 4719 generic.go:334] "Generic (PLEG): container finished" podID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerID="995870f2ba87768d912034499ef5eff3526053ff9023663c41b71e66bea87e6d" exitCode=0
Nov 24 10:06:08 crc kubenswrapper[4719]: I1124 10:06:08.690687 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24vr4" event={"ID":"5c7e15a0-c42f-4f39-bade-c17346e5eb70","Type":"ContainerDied","Data":"995870f2ba87768d912034499ef5eff3526053ff9023663c41b71e66bea87e6d"}
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.013390 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.112758 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbjs\" (UniqueName: \"kubernetes.io/projected/5c7e15a0-c42f-4f39-bade-c17346e5eb70-kube-api-access-zrbjs\") pod \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") "
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.112810 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-catalog-content\") pod \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") "
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.112893 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-utilities\") pod \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\" (UID: \"5c7e15a0-c42f-4f39-bade-c17346e5eb70\") "
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.114643 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-utilities" (OuterVolumeSpecName: "utilities") pod "5c7e15a0-c42f-4f39-bade-c17346e5eb70" (UID: "5c7e15a0-c42f-4f39-bade-c17346e5eb70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.128350 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7e15a0-c42f-4f39-bade-c17346e5eb70-kube-api-access-zrbjs" (OuterVolumeSpecName: "kube-api-access-zrbjs") pod "5c7e15a0-c42f-4f39-bade-c17346e5eb70" (UID: "5c7e15a0-c42f-4f39-bade-c17346e5eb70"). InnerVolumeSpecName "kube-api-access-zrbjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.135738 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c7e15a0-c42f-4f39-bade-c17346e5eb70" (UID: "5c7e15a0-c42f-4f39-bade-c17346e5eb70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.215648 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrbjs\" (UniqueName: \"kubernetes.io/projected/5c7e15a0-c42f-4f39-bade-c17346e5eb70-kube-api-access-zrbjs\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.215692 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.215704 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e15a0-c42f-4f39-bade-c17346e5eb70-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.701767 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24vr4" event={"ID":"5c7e15a0-c42f-4f39-bade-c17346e5eb70","Type":"ContainerDied","Data":"556e9822329aad72e58118d949840bfe0440c04639f7a015b20f655fd197d356"}
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.701832 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24vr4"
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.701856 4719 scope.go:117] "RemoveContainer" containerID="995870f2ba87768d912034499ef5eff3526053ff9023663c41b71e66bea87e6d"
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.744490 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24vr4"]
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.744663 4719 scope.go:117] "RemoveContainer" containerID="e4e25c70e5a78c860638732b710a88b05c2fab3979af8f7268524b820b5a3440"
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.757017 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-24vr4"]
Nov 24 10:06:09 crc kubenswrapper[4719]: I1124 10:06:09.796523 4719 scope.go:117] "RemoveContainer" containerID="45ad1a6c12f25d45f9d21ebf9e1f139bf5b7f6d6d0ed0dac2baa2df44f5d0ff7"
Nov 24 10:06:10 crc kubenswrapper[4719]: I1124 10:06:10.533247 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" path="/var/lib/kubelet/pods/5c7e15a0-c42f-4f39-bade-c17346e5eb70/volumes"
Nov 24 10:06:34 crc kubenswrapper[4719]: I1124 10:06:34.562321 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 10:06:34 crc kubenswrapper[4719]: I1124 10:06:34.562850 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 10:06:42 crc kubenswrapper[4719]: I1124 10:06:42.994951 4719 generic.go:334] "Generic (PLEG): container finished" podID="5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68" containerID="409600c6505e0921c03d23d81dfff32a3c100d2b6909989280fa4524868c59d0" exitCode=0
Nov 24 10:06:42 crc kubenswrapper[4719]: I1124 10:06:42.995017 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/crc-debug-7f286" event={"ID":"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68","Type":"ContainerDied","Data":"409600c6505e0921c03d23d81dfff32a3c100d2b6909989280fa4524868c59d0"}
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.706840 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.745422 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7ct94/crc-debug-7f286"]
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.757114 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7ct94/crc-debug-7f286"]
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.855434 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsvr5\" (UniqueName: \"kubernetes.io/projected/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-kube-api-access-xsvr5\") pod \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\" (UID: \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\") "
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.855650 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-host\") pod \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\" (UID: \"5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68\") "
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.856083 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-host" (OuterVolumeSpecName: "host") pod "5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68" (UID: "5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.856189 4719 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-host\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.861030 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-kube-api-access-xsvr5" (OuterVolumeSpecName: "kube-api-access-xsvr5") pod "5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68" (UID: "5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68"). InnerVolumeSpecName "kube-api-access-xsvr5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 10:06:44 crc kubenswrapper[4719]: I1124 10:06:44.961474 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsvr5\" (UniqueName: \"kubernetes.io/projected/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68-kube-api-access-xsvr5\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:45 crc kubenswrapper[4719]: I1124 10:06:45.019871 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb973715867869fdd1efdf68eac7cf2fad9c8cc95b268a3e405ad2544c599d3"
Nov 24 10:06:45 crc kubenswrapper[4719]: I1124 10:06:45.019949 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-7f286"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.033563 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7ct94/crc-debug-bxrdq"]
Nov 24 10:06:46 crc kubenswrapper[4719]: E1124 10:06:46.034336 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerName="extract-utilities"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.034351 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerName="extract-utilities"
Nov 24 10:06:46 crc kubenswrapper[4719]: E1124 10:06:46.034366 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68" containerName="container-00"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.034374 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68" containerName="container-00"
Nov 24 10:06:46 crc kubenswrapper[4719]: E1124 10:06:46.034383 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerName="extract-content"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.034389 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerName="extract-content"
Nov 24 10:06:46 crc kubenswrapper[4719]: E1124 10:06:46.034407 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerName="registry-server"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.034412 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerName="registry-server"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.034582 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68" containerName="container-00"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.034603 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7e15a0-c42f-4f39-bade-c17346e5eb70" containerName="registry-server"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.035229 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.184411 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7w2b\" (UniqueName: \"kubernetes.io/projected/0a7c209b-f499-4e95-8f00-0a4119cd020a-kube-api-access-f7w2b\") pod \"crc-debug-bxrdq\" (UID: \"0a7c209b-f499-4e95-8f00-0a4119cd020a\") " pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.184634 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a7c209b-f499-4e95-8f00-0a4119cd020a-host\") pod \"crc-debug-bxrdq\" (UID: \"0a7c209b-f499-4e95-8f00-0a4119cd020a\") " pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.286272 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7w2b\" (UniqueName: \"kubernetes.io/projected/0a7c209b-f499-4e95-8f00-0a4119cd020a-kube-api-access-f7w2b\") pod \"crc-debug-bxrdq\" (UID: \"0a7c209b-f499-4e95-8f00-0a4119cd020a\") " pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.286432 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a7c209b-f499-4e95-8f00-0a4119cd020a-host\") pod \"crc-debug-bxrdq\" (UID: \"0a7c209b-f499-4e95-8f00-0a4119cd020a\") " pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.286647 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a7c209b-f499-4e95-8f00-0a4119cd020a-host\") pod \"crc-debug-bxrdq\" (UID: \"0a7c209b-f499-4e95-8f00-0a4119cd020a\") " pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.320073 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7w2b\" (UniqueName: \"kubernetes.io/projected/0a7c209b-f499-4e95-8f00-0a4119cd020a-kube-api-access-f7w2b\") pod \"crc-debug-bxrdq\" (UID: \"0a7c209b-f499-4e95-8f00-0a4119cd020a\") " pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.351151 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:46 crc kubenswrapper[4719]: I1124 10:06:46.537490 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68" path="/var/lib/kubelet/pods/5a1b8cd5-5e82-40f6-8751-5baf5b4a9b68/volumes"
Nov 24 10:06:47 crc kubenswrapper[4719]: I1124 10:06:47.034947 4719 generic.go:334] "Generic (PLEG): container finished" podID="0a7c209b-f499-4e95-8f00-0a4119cd020a" containerID="36269954bc31d41bf1978d090c19c5a211ebfb5508eee79a9c119fd4d9f33edf" exitCode=0
Nov 24 10:06:47 crc kubenswrapper[4719]: I1124 10:06:47.035027 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/crc-debug-bxrdq" event={"ID":"0a7c209b-f499-4e95-8f00-0a4119cd020a","Type":"ContainerDied","Data":"36269954bc31d41bf1978d090c19c5a211ebfb5508eee79a9c119fd4d9f33edf"}
Nov 24 10:06:47 crc kubenswrapper[4719]: I1124 10:06:47.035239 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/crc-debug-bxrdq" event={"ID":"0a7c209b-f499-4e95-8f00-0a4119cd020a","Type":"ContainerStarted","Data":"c4757689462144b147059e8b6b239ad89d79af3441921c4e8424af48405c3476"}
Nov 24 10:06:47 crc kubenswrapper[4719]: I1124 10:06:47.441485 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7ct94/crc-debug-bxrdq"]
Nov 24 10:06:47 crc kubenswrapper[4719]: I1124 10:06:47.452517 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7ct94/crc-debug-bxrdq"]
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.144081 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.321051 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7w2b\" (UniqueName: \"kubernetes.io/projected/0a7c209b-f499-4e95-8f00-0a4119cd020a-kube-api-access-f7w2b\") pod \"0a7c209b-f499-4e95-8f00-0a4119cd020a\" (UID: \"0a7c209b-f499-4e95-8f00-0a4119cd020a\") "
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.321334 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a7c209b-f499-4e95-8f00-0a4119cd020a-host\") pod \"0a7c209b-f499-4e95-8f00-0a4119cd020a\" (UID: \"0a7c209b-f499-4e95-8f00-0a4119cd020a\") "
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.321810 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a7c209b-f499-4e95-8f00-0a4119cd020a-host" (OuterVolumeSpecName: "host") pod "0a7c209b-f499-4e95-8f00-0a4119cd020a" (UID: "0a7c209b-f499-4e95-8f00-0a4119cd020a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.336504 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7c209b-f499-4e95-8f00-0a4119cd020a-kube-api-access-f7w2b" (OuterVolumeSpecName: "kube-api-access-f7w2b") pod "0a7c209b-f499-4e95-8f00-0a4119cd020a" (UID: "0a7c209b-f499-4e95-8f00-0a4119cd020a"). InnerVolumeSpecName "kube-api-access-f7w2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.423604 4719 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a7c209b-f499-4e95-8f00-0a4119cd020a-host\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.423647 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7w2b\" (UniqueName: \"kubernetes.io/projected/0a7c209b-f499-4e95-8f00-0a4119cd020a-kube-api-access-f7w2b\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.530896 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7c209b-f499-4e95-8f00-0a4119cd020a" path="/var/lib/kubelet/pods/0a7c209b-f499-4e95-8f00-0a4119cd020a/volumes"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.639154 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7ct94/crc-debug-bd9pq"]
Nov 24 10:06:48 crc kubenswrapper[4719]: E1124 10:06:48.639556 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7c209b-f499-4e95-8f00-0a4119cd020a" containerName="container-00"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.639574 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7c209b-f499-4e95-8f00-0a4119cd020a" containerName="container-00"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.639780 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7c209b-f499-4e95-8f00-0a4119cd020a" containerName="container-00"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.640385 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.731541 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrnj\" (UniqueName: \"kubernetes.io/projected/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-kube-api-access-hsrnj\") pod \"crc-debug-bd9pq\" (UID: \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\") " pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.731655 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-host\") pod \"crc-debug-bd9pq\" (UID: \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\") " pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.833838 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrnj\" (UniqueName: \"kubernetes.io/projected/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-kube-api-access-hsrnj\") pod \"crc-debug-bd9pq\" (UID: \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\") " pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.833960 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-host\") pod \"crc-debug-bd9pq\" (UID: \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\") " pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.834162 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-host\") pod \"crc-debug-bd9pq\" (UID: \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\") " pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.855697 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrnj\" (UniqueName: \"kubernetes.io/projected/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-kube-api-access-hsrnj\") pod \"crc-debug-bd9pq\" (UID: \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\") " pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:48 crc kubenswrapper[4719]: I1124 10:06:48.953830 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:49 crc kubenswrapper[4719]: I1124 10:06:49.051331 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/crc-debug-bd9pq" event={"ID":"27386b4c-fd74-4b04-89af-c8e23cfa6c9a","Type":"ContainerStarted","Data":"f6f0f3a276ab4737377da92a949e02f28309d61c61460b940daab7302bd215c5"}
Nov 24 10:06:49 crc kubenswrapper[4719]: I1124 10:06:49.052830 4719 scope.go:117] "RemoveContainer" containerID="36269954bc31d41bf1978d090c19c5a211ebfb5508eee79a9c119fd4d9f33edf"
Nov 24 10:06:49 crc kubenswrapper[4719]: I1124 10:06:49.052949 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-bxrdq"
Nov 24 10:06:50 crc kubenswrapper[4719]: I1124 10:06:50.063486 4719 generic.go:334] "Generic (PLEG): container finished" podID="27386b4c-fd74-4b04-89af-c8e23cfa6c9a" containerID="7ba16eeac0ee1ad27ffb9d56366e80d654892963bcbb26fcb20a8770919b692f" exitCode=0
Nov 24 10:06:50 crc kubenswrapper[4719]: I1124 10:06:50.063701 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/crc-debug-bd9pq" event={"ID":"27386b4c-fd74-4b04-89af-c8e23cfa6c9a","Type":"ContainerDied","Data":"7ba16eeac0ee1ad27ffb9d56366e80d654892963bcbb26fcb20a8770919b692f"}
Nov 24 10:06:50 crc kubenswrapper[4719]: I1124 10:06:50.107237 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7ct94/crc-debug-bd9pq"]
Nov 24 10:06:50 crc kubenswrapper[4719]: I1124 10:06:50.118686 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7ct94/crc-debug-bd9pq"]
Nov 24 10:06:51 crc kubenswrapper[4719]: I1124 10:06:51.208085 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:51 crc kubenswrapper[4719]: I1124 10:06:51.275648 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsrnj\" (UniqueName: \"kubernetes.io/projected/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-kube-api-access-hsrnj\") pod \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\" (UID: \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\") "
Nov 24 10:06:51 crc kubenswrapper[4719]: I1124 10:06:51.275874 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-host\") pod \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\" (UID: \"27386b4c-fd74-4b04-89af-c8e23cfa6c9a\") "
Nov 24 10:06:51 crc kubenswrapper[4719]: I1124 10:06:51.276413 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-host" (OuterVolumeSpecName: "host") pod "27386b4c-fd74-4b04-89af-c8e23cfa6c9a" (UID: "27386b4c-fd74-4b04-89af-c8e23cfa6c9a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 10:06:51 crc kubenswrapper[4719]: I1124 10:06:51.295338 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-kube-api-access-hsrnj" (OuterVolumeSpecName: "kube-api-access-hsrnj") pod "27386b4c-fd74-4b04-89af-c8e23cfa6c9a" (UID: "27386b4c-fd74-4b04-89af-c8e23cfa6c9a"). InnerVolumeSpecName "kube-api-access-hsrnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 10:06:51 crc kubenswrapper[4719]: I1124 10:06:51.377939 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsrnj\" (UniqueName: \"kubernetes.io/projected/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-kube-api-access-hsrnj\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:51 crc kubenswrapper[4719]: I1124 10:06:51.377982 4719 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27386b4c-fd74-4b04-89af-c8e23cfa6c9a-host\") on node \"crc\" DevicePath \"\""
Nov 24 10:06:52 crc kubenswrapper[4719]: I1124 10:06:52.116378 4719 scope.go:117] "RemoveContainer" containerID="7ba16eeac0ee1ad27ffb9d56366e80d654892963bcbb26fcb20a8770919b692f"
Nov 24 10:06:52 crc kubenswrapper[4719]: I1124 10:06:52.116796 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7ct94/crc-debug-bd9pq"
Nov 24 10:06:52 crc kubenswrapper[4719]: I1124 10:06:52.532524 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27386b4c-fd74-4b04-89af-c8e23cfa6c9a" path="/var/lib/kubelet/pods/27386b4c-fd74-4b04-89af-c8e23cfa6c9a/volumes"
Nov 24 10:07:04 crc kubenswrapper[4719]: I1124 10:07:04.562253 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 10:07:04 crc kubenswrapper[4719]: I1124 10:07:04.562672 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 10:07:34 crc kubenswrapper[4719]: I1124 10:07:34.562322 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 10:07:34 crc kubenswrapper[4719]: I1124 10:07:34.562885 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 10:07:34 crc kubenswrapper[4719]: I1124 10:07:34.562952 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6"
Nov 24 10:07:34 crc kubenswrapper[4719]: I1124 10:07:34.563715 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 10:07:34 crc kubenswrapper[4719]: I1124 10:07:34.563766 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" gracePeriod=600
Nov 24 10:07:34 crc kubenswrapper[4719]: E1124 10:07:34.700560 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 10:07:35 crc kubenswrapper[4719]: I1124 10:07:35.697468 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" exitCode=0
Nov 24 10:07:35 crc kubenswrapper[4719]: I1124 10:07:35.697514 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de"}
Nov 24 10:07:35 crc kubenswrapper[4719]: I1124 10:07:35.697742 4719 scope.go:117] "RemoveContainer" containerID="22a40b292aa2c73b1bc4ad790f908be2ce33655290a1fd793eac90657829c15d"
Nov 24 10:07:35 crc kubenswrapper[4719]: I1124 10:07:35.698437 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de"
Nov 24 10:07:35 crc kubenswrapper[4719]: E1124 10:07:35.698705 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6"
Nov 24 10:07:40 crc kubenswrapper[4719]: I1124 10:07:40.214799 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c84b4b586-mwtc8_390c94ff-225b-448b-963d-9b8cb729963a/barbican-api/0.log"
Nov 24 10:07:40 crc kubenswrapper[4719]: I1124 10:07:40.412100 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-68fd59f556-bvd2x_6feeb8da-45f5-4eb9-bae3-5101afc7e021/barbican-keystone-listener/0.log"
Nov 24 10:07:40 crc kubenswrapper[4719]: I1124 10:07:40.441754 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c84b4b586-mwtc8_390c94ff-225b-448b-963d-9b8cb729963a/barbican-api-log/0.log"
Nov 24 10:07:40 crc kubenswrapper[4719]: I1124 10:07:40.501283 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-68fd59f556-bvd2x_6feeb8da-45f5-4eb9-bae3-5101afc7e021/barbican-keystone-listener-log/0.log"
Nov 24 10:07:40 crc kubenswrapper[4719]: I1124 10:07:40.675002 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55fc6d8c7-9576d_9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb/barbican-worker/0.log"
Nov 24 10:07:40 crc kubenswrapper[4719]: I1124 10:07:40.707749 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55fc6d8c7-9576d_9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb/barbican-worker-log/0.log"
Nov 24 10:07:40 crc kubenswrapper[4719]: I1124 10:07:40.883634 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx_2825c32a-3ceb-4ba8-a522-554244ca93dd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.035432 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd478071-4e9d-402f-afa7-fbd28f489095/ceilometer-central-agent/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.129788 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd478071-4e9d-402f-afa7-fbd28f489095/ceilometer-notification-agent/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.216940 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd478071-4e9d-402f-afa7-fbd28f489095/proxy-httpd/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.238442 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd478071-4e9d-402f-afa7-fbd28f489095/sg-core/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.415098 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr_1dad4f07-729f-4a99-bc32-62f666007c12/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.462267 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq_6d07d001-6f91-4b09-9897-01f55286e015/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.677800 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ee147176-e4d4-4f7c-a73b-aa861bc83f31/cinder-api/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.746585 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ee147176-e4d4-4f7c-a73b-aa861bc83f31/cinder-api-log/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.933135 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9d9e3bfc-9c58-4534-89f9-72f35c264a80/cinder-backup/0.log"
Nov 24 10:07:41 crc kubenswrapper[4719]: I1124 10:07:41.977665 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9d9e3bfc-9c58-4534-89f9-72f35c264a80/probe/0.log"
Nov 24 10:07:42 crc kubenswrapper[4719]: I1124 10:07:42.014241 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_44ceda2d-a4e3-4606-be8b-fa3806e4be38/cinder-scheduler/0.log"
Nov 24 10:07:42 crc kubenswrapper[4719]: I1124 10:07:42.190792 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_44ceda2d-a4e3-4606-be8b-fa3806e4be38/probe/0.log"
Nov 24 10:07:42 crc kubenswrapper[4719]: I1124 10:07:42.367064 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_82bfb246-8a64-46b7-9223-f2158b114186/cinder-volume/0.log"
Nov 24 10:07:42 crc kubenswrapper[4719]: I1124 10:07:42.399403 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_82bfb246-8a64-46b7-9223-f2158b114186/probe/0.log"
Nov 24 10:07:42 crc kubenswrapper[4719]: I1124 10:07:42.635432 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l_70b5dfb2-d163-4188-989e-e1f2a9d84026/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:42 crc kubenswrapper[4719]: I1124 10:07:42.708659 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-82qn6_9ebf3aed-eec5-4676-9f83-23ea070aa92e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:42 crc kubenswrapper[4719]: I1124 10:07:42.881444 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-x2fxq_643db723-7fbb-4c9e-a815-fcfbc4eab02c/init/0.log"
Nov 24 10:07:43 crc kubenswrapper[4719]: I1124 10:07:43.049664 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-x2fxq_643db723-7fbb-4c9e-a815-fcfbc4eab02c/init/0.log"
Nov 24 10:07:43 crc kubenswrapper[4719]: I1124 10:07:43.154418 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0b2a5521-1fe8-40c7-af69-18332a312c14/glance-httpd/0.log"
Nov 24 10:07:43 crc kubenswrapper[4719]: I1124 10:07:43.251058 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-x2fxq_643db723-7fbb-4c9e-a815-fcfbc4eab02c/dnsmasq-dns/0.log"
Nov 24 10:07:43 crc kubenswrapper[4719]: I1124 10:07:43.303656 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0b2a5521-1fe8-40c7-af69-18332a312c14/glance-log/0.log"
Nov 24 10:07:43 crc kubenswrapper[4719]: I1124 10:07:43.480206 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e745f799-46a2-4fd7-b32d-09a11558070b/glance-log/0.log"
Nov 24 10:07:43 crc kubenswrapper[4719]: I1124 10:07:43.510231 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e745f799-46a2-4fd7-b32d-09a11558070b/glance-httpd/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.124484 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2rf86_b1eec709-2c88-4a47-bc8b-51f49cc99053/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.186123 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f6b7744d-ql24k_494049ce-0355-420c-9d3b-774f7befb12a/horizon/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.343420 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f6b7744d-ql24k_494049ce-0355-420c-9d3b-774f7befb12a/horizon-log/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.427402 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dvw2g_b7e3784d-ae59-4dce-9c51-429e2361ee3b/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.668149 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c66bd98b8-qwf7d_4bfe0fc6-5440-468a-9ad6-6f9f6171e639/keystone-api/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.668383 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399641-rwbnf_6ff61f4c-fc69-4299-987e-1c9ca3e1c633/keystone-cron/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.773695 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cc7de5f2-3f27-47e7-a08e-f3b13211531a/kube-state-metrics/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.939911 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf_e45a8b91-3c8a-4471-852f-d648ddadcf6f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:44 crc kubenswrapper[4719]: I1124 10:07:44.978679 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7db9d547-856d-42d1-a2b5-bdc02f69d938/manila-api/0.log"
Nov 24 10:07:45 crc kubenswrapper[4719]: I1124 10:07:45.270214 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7db9d547-856d-42d1-a2b5-bdc02f69d938/manila-api-log/0.log"
Nov 24 10:07:45 crc kubenswrapper[4719]: I1124 10:07:45.459187 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_e101dc58-4d71-4456-aa34-e215690b34bf/manila-scheduler/0.log"
Nov 24 10:07:46 crc kubenswrapper[4719]: I1124 10:07:46.105223 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_e101dc58-4d71-4456-aa34-e215690b34bf/probe/0.log"
Nov 24 10:07:46 crc kubenswrapper[4719]: I1124 10:07:46.176323 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_880fcfd8-382a-4865-997b-203e11aad18d/probe/0.log"
Nov 24 10:07:46 crc kubenswrapper[4719]: I1124 10:07:46.189385 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_880fcfd8-382a-4865-997b-203e11aad18d/manila-share/0.log"
Nov 24 10:07:46 crc kubenswrapper[4719]: I1124 10:07:46.586492 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86d4855669-sjtqj_735cee72-40a1-4828-936f-9459f731b3da/neutron-api/0.log"
Nov 24 10:07:46 crc kubenswrapper[4719]: I1124 10:07:46.609294 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86d4855669-sjtqj_735cee72-40a1-4828-936f-9459f731b3da/neutron-httpd/0.log"
Nov 24 10:07:46 crc kubenswrapper[4719]: I1124 10:07:46.727966 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm_4e1b3223-80c0-40c5-9f45-833af2ab03be/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:47 crc kubenswrapper[4719]: I1124 10:07:47.358906 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_007b5bfc-1e0a-4468-87ae-5fae8c196871/nova-api-log/0.log"
Nov 24 10:07:47 crc kubenswrapper[4719]: I1124 10:07:47.401234 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7bb7c808-2485-4aba-acd2-2b509f4ed607/nova-cell0-conductor-conductor/0.log"
Nov 24 10:07:47 crc kubenswrapper[4719]: I1124 10:07:47.633222 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f/nova-cell1-conductor-conductor/0.log"
Nov 24 10:07:47 crc kubenswrapper[4719]: I1124 10:07:47.650129 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_007b5bfc-1e0a-4468-87ae-5fae8c196871/nova-api-api/0.log"
Nov 24 10:07:47 crc kubenswrapper[4719]: I1124 10:07:47.816105 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6229cd6f-c2de-47c4-9edf-99ebeddaf05b/nova-cell1-novncproxy-novncproxy/0.log"
Nov 24 10:07:48 crc kubenswrapper[4719]: I1124 10:07:48.066306 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45_c36f9bbf-22ba-458e-a531-081db1b99878/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 24 10:07:48 crc kubenswrapper[4719]: I1124 10:07:48.313108 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3facc49a-dd07-4db6-b353-a06ff01dc19c/nova-metadata-log/0.log"
Nov 24 10:07:48 crc kubenswrapper[4719]: I1124 10:07:48.522330 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de"
Nov 24 10:07:48 crc kubenswrapper[4719]: E1124 10:07:48.522630 4719
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:07:48 crc kubenswrapper[4719]: I1124 10:07:48.592073 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e543db5c-487f-4724-91aa-c3ea4cb33149/nova-scheduler-scheduler/0.log" Nov 24 10:07:48 crc kubenswrapper[4719]: I1124 10:07:48.682387 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_98cf534d-3e13-4443-901c-0755d91b2f09/mysql-bootstrap/0.log" Nov 24 10:07:48 crc kubenswrapper[4719]: I1124 10:07:48.867022 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_98cf534d-3e13-4443-901c-0755d91b2f09/mysql-bootstrap/0.log" Nov 24 10:07:48 crc kubenswrapper[4719]: I1124 10:07:48.887662 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_98cf534d-3e13-4443-901c-0755d91b2f09/galera/0.log" Nov 24 10:07:49 crc kubenswrapper[4719]: I1124 10:07:49.057973 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38/mysql-bootstrap/0.log" Nov 24 10:07:49 crc kubenswrapper[4719]: I1124 10:07:49.344723 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38/galera/0.log" Nov 24 10:07:49 crc kubenswrapper[4719]: I1124 10:07:49.371396 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38/mysql-bootstrap/0.log" Nov 24 10:07:49 crc kubenswrapper[4719]: I1124 10:07:49.567970 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_38d62700-956d-4aa3-a239-ff6fb8068ded/openstackclient/0.log" Nov 24 10:07:49 crc kubenswrapper[4719]: I1124 10:07:49.672005 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ccf6d_225b57e5-7f49-4b51-87db-6c790f23bf6e/ovn-controller/0.log" Nov 24 10:07:49 crc kubenswrapper[4719]: I1124 10:07:49.925680 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xdb6r_7bc3fe26-9fdd-4077-b4e1-6f9a35219a21/openstack-network-exporter/0.log" Nov 24 10:07:50 crc kubenswrapper[4719]: I1124 10:07:50.032584 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3facc49a-dd07-4db6-b353-a06ff01dc19c/nova-metadata-metadata/0.log" Nov 24 10:07:50 crc kubenswrapper[4719]: I1124 10:07:50.220308 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bk9qz_d36ea9cd-a7ed-463f-9ef5-58066e1446ed/ovsdb-server-init/0.log" Nov 24 10:07:50 crc kubenswrapper[4719]: I1124 10:07:50.496027 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bk9qz_d36ea9cd-a7ed-463f-9ef5-58066e1446ed/ovsdb-server/0.log" Nov 24 10:07:50 crc kubenswrapper[4719]: I1124 10:07:50.545425 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bk9qz_d36ea9cd-a7ed-463f-9ef5-58066e1446ed/ovs-vswitchd/0.log" Nov 24 10:07:50 crc kubenswrapper[4719]: I1124 
10:07:50.581829 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bk9qz_d36ea9cd-a7ed-463f-9ef5-58066e1446ed/ovsdb-server-init/0.log" Nov 24 10:07:50 crc kubenswrapper[4719]: I1124 10:07:50.773787 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6kl84_76df25ad-66c3-42d0-8539-b083731a87be/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:07:50 crc kubenswrapper[4719]: I1124 10:07:50.870231 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_73dcc2c6-9ccf-4682-bd39-3c439d4691a2/ovn-northd/0.log" Nov 24 10:07:50 crc kubenswrapper[4719]: I1124 10:07:50.915064 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_73dcc2c6-9ccf-4682-bd39-3c439d4691a2/openstack-network-exporter/0.log" Nov 24 10:07:51 crc kubenswrapper[4719]: I1124 10:07:51.150127 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30c29a06-49fe-444c-befa-e10d67ac0e5e/openstack-network-exporter/0.log" Nov 24 10:07:51 crc kubenswrapper[4719]: I1124 10:07:51.268445 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30c29a06-49fe-444c-befa-e10d67ac0e5e/ovsdbserver-nb/0.log" Nov 24 10:07:51 crc kubenswrapper[4719]: I1124 10:07:51.436700 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0be9bc93-deb3-4864-a259-dc32d2d64870/openstack-network-exporter/0.log" Nov 24 10:07:51 crc kubenswrapper[4719]: I1124 10:07:51.509682 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0be9bc93-deb3-4864-a259-dc32d2d64870/ovsdbserver-sb/0.log" Nov 24 10:07:51 crc kubenswrapper[4719]: I1124 10:07:51.726293 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5478d99856-2md7b_d70d9227-aa5e-4855-b4de-8bb688c24f34/placement-api/0.log" Nov 24 10:07:51 crc kubenswrapper[4719]: I1124 10:07:51.832656 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cdc73497-dc8e-44ef-b146-be6598f87eec/setup-container/0.log" Nov 24 10:07:51 crc kubenswrapper[4719]: I1124 10:07:51.961202 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5478d99856-2md7b_d70d9227-aa5e-4855-b4de-8bb688c24f34/placement-log/0.log" Nov 24 10:07:52 crc kubenswrapper[4719]: I1124 10:07:52.143126 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cdc73497-dc8e-44ef-b146-be6598f87eec/setup-container/0.log" Nov 24 10:07:52 crc kubenswrapper[4719]: I1124 10:07:52.229748 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_576b0826-aefe-4ef2-b0f8-77e8d7811a29/setup-container/0.log" Nov 24 10:07:52 crc kubenswrapper[4719]: I1124 10:07:52.244884 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cdc73497-dc8e-44ef-b146-be6598f87eec/rabbitmq/0.log" Nov 24 10:07:52 crc kubenswrapper[4719]: I1124 10:07:52.570433 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_576b0826-aefe-4ef2-b0f8-77e8d7811a29/setup-container/0.log" Nov 24 10:07:52 crc kubenswrapper[4719]: I1124 10:07:52.679255 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_576b0826-aefe-4ef2-b0f8-77e8d7811a29/rabbitmq/0.log" Nov 24 10:07:53 crc kubenswrapper[4719]: I1124 10:07:53.073937 4719 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vb784_6aca06db-5628-433e-a1f4-f603fa8ece51/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:07:53 crc kubenswrapper[4719]: I1124 10:07:53.325685 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82_63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:07:53 crc kubenswrapper[4719]: I1124 10:07:53.418643 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gj9mj_f686dd59-557a-4156-bf11-a0face9d15ea/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:07:53 crc kubenswrapper[4719]: I1124 10:07:53.648582 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5pzbb_d2a2f001-9ea9-45a6-a2c6-6beb9de6b372/ssh-known-hosts-edpm-deployment/0.log" Nov 24 10:07:53 crc kubenswrapper[4719]: I1124 10:07:53.726929 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9c489706-83cc-4c99-9146-178f1efd5551/tempest-tests-tempest-tests-runner/0.log" Nov 24 10:07:53 crc kubenswrapper[4719]: I1124 10:07:53.926398 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_373e0d8e-a16a-4daa-8b4c-895994f91783/test-operator-logs-container/0.log" Nov 24 10:07:54 crc kubenswrapper[4719]: I1124 10:07:54.092217 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw_6d644fcc-6653-41e6-835d-430f31694bd1/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:08:03 crc kubenswrapper[4719]: I1124 10:08:03.520619 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:08:03 crc kubenswrapper[4719]: E1124 10:08:03.521562 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:08:07 crc kubenswrapper[4719]: I1124 10:08:07.210739 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_769e49a4-92ab-4c92-aebd-3c79f66a6227/memcached/0.log" Nov 24 10:08:16 crc kubenswrapper[4719]: I1124 10:08:16.521242 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:08:16 crc kubenswrapper[4719]: E1124 10:08:16.522256 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:08:24 crc kubenswrapper[4719]: I1124 10:08:24.438497 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-6hhz5_a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1/kube-rbac-proxy/0.log" Nov 24 10:08:24 crc kubenswrapper[4719]: I1124 10:08:24.508462 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-6hhz5_a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1/manager/0.log" Nov 24 10:08:24 crc kubenswrapper[4719]: I1124 10:08:24.630974 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-sf5qt_064a4ed4-46e3-4daf-8a9d-21c8475ba687/kube-rbac-proxy/0.log" Nov 24 10:08:24 crc kubenswrapper[4719]: I1124 10:08:24.711133 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-sf5qt_064a4ed4-46e3-4daf-8a9d-21c8475ba687/manager/0.log" Nov 24 10:08:24 crc kubenswrapper[4719]: I1124 10:08:24.882062 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-tjjkt_9d35d376-e7fb-41da-bf47-efd2e5f3ea57/kube-rbac-proxy/0.log" Nov 24 10:08:24 crc kubenswrapper[4719]: I1124 10:08:24.902523 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-tjjkt_9d35d376-e7fb-41da-bf47-efd2e5f3ea57/manager/0.log" Nov 24 10:08:25 crc kubenswrapper[4719]: I1124 10:08:25.052514 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/util/0.log" Nov 24 10:08:25 crc kubenswrapper[4719]: I1124 10:08:25.278671 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/pull/0.log" Nov 24 10:08:25 crc kubenswrapper[4719]: I1124 10:08:25.278758 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/pull/0.log" Nov 24 10:08:25 crc kubenswrapper[4719]: I1124 10:08:25.421700 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/util/0.log" Nov 24 10:08:25 crc kubenswrapper[4719]: I1124 10:08:25.501392 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/util/0.log" Nov 24 10:08:25 crc kubenswrapper[4719]: I1124 10:08:25.538291 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/extract/0.log" Nov 24 10:08:25 crc kubenswrapper[4719]: I1124 10:08:25.624470 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/pull/0.log" Nov 24 10:08:26 crc kubenswrapper[4719]: I1124 10:08:26.023458 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-c9h59_5dce0610-7470-47d2-ae74-ca7fccb82b1f/kube-rbac-proxy/0.log" Nov 24 10:08:26 crc 
kubenswrapper[4719]: I1124 10:08:26.083697 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-c9h59_5dce0610-7470-47d2-ae74-ca7fccb82b1f/manager/0.log" Nov 24 10:08:26 crc kubenswrapper[4719]: I1124 10:08:26.136628 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-xkfjt_5a2058d2-1589-484e-a5a1-de7e31af1a63/kube-rbac-proxy/0.log" Nov 24 10:08:26 crc kubenswrapper[4719]: I1124 10:08:26.239808 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-xkfjt_5a2058d2-1589-484e-a5a1-de7e31af1a63/manager/0.log" Nov 24 10:08:26 crc kubenswrapper[4719]: I1124 10:08:26.316564 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-j22wh_9d835ba0-d338-45db-b417-7087d4cced01/kube-rbac-proxy/0.log" Nov 24 10:08:26 crc kubenswrapper[4719]: I1124 10:08:26.375359 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-j22wh_9d835ba0-d338-45db-b417-7087d4cced01/manager/0.log" Nov 24 10:08:26 crc kubenswrapper[4719]: I1124 10:08:26.935780 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-fhb77_08979ac6-d1d0-4ef7-8996-5b02e8e8dae6/kube-rbac-proxy/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.168956 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-fhb77_08979ac6-d1d0-4ef7-8996-5b02e8e8dae6/manager/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.286605 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-4sxvh_231d0c7b-d43e-4169-8b4e-940289894809/kube-rbac-proxy/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.287839 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-4sxvh_231d0c7b-d43e-4169-8b4e-940289894809/manager/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.396461 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-lsd4k_17ddd27a-66d1-4d80-abc7-80fde501fa8d/kube-rbac-proxy/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.508628 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-lz2r8_23502fbc-6d87-4ca2-80b3-d5af1e94205e/kube-rbac-proxy/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.521022 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:08:27 crc kubenswrapper[4719]: E1124 10:08:27.529994 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.564946 4719 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-lsd4k_17ddd27a-66d1-4d80-abc7-80fde501fa8d/manager/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.702761 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-lz2r8_23502fbc-6d87-4ca2-80b3-d5af1e94205e/manager/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.860905 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-r2r85_a0a59a11-1bf3-4ff8-8496-9414bc0ae549/kube-rbac-proxy/0.log" Nov 24 10:08:27 crc kubenswrapper[4719]: I1124 10:08:27.907823 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-r2r85_a0a59a11-1bf3-4ff8-8496-9414bc0ae549/manager/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.014179 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-lthw6_30241c11-005e-4410-ad1a-71d6c5c0910f/kube-rbac-proxy/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.095128 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-lthw6_30241c11-005e-4410-ad1a-71d6c5c0910f/manager/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.273797 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-plrvj_070e32a3-4fa9-4ab4-9e55-d76c0c87db3c/kube-rbac-proxy/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.353453 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-rnvl8_1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce/kube-rbac-proxy/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.362772 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-plrvj_070e32a3-4fa9-4ab4-9e55-d76c0c87db3c/manager/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.390822 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-rnvl8_1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce/manager/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.470979 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-lf45p_643149e5-3960-4912-a497-c0cb9c0e722f/kube-rbac-proxy/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.552455 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-lf45p_643149e5-3960-4912-a497-c0cb9c0e722f/manager/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.622688 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5f88c7d9f9-n97nx_37253c68-54fd-490c-9486-f2a4f2ffe834/kube-rbac-proxy/0.log" Nov 24 10:08:28 crc kubenswrapper[4719]: I1124 10:08:28.891224 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-56cb4fc9f6-bx26b_2065277b-46c2-4b27-9458-f671c1319c76/kube-rbac-proxy/0.log" Nov 24 10:08:29 crc kubenswrapper[4719]: I1124 
10:08:29.037084 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-czgfr_96d6d0aa-864c-432b-a1c1-5eef084a21b1/registry-server/0.log" Nov 24 10:08:29 crc kubenswrapper[4719]: I1124 10:08:29.149397 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-56cb4fc9f6-bx26b_2065277b-46c2-4b27-9458-f671c1319c76/operator/0.log" Nov 24 10:08:29 crc kubenswrapper[4719]: I1124 10:08:29.364708 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-gqnbl_c4688244-99a9-4a75-8501-b1062f24b517/kube-rbac-proxy/0.log" Nov 24 10:08:29 crc kubenswrapper[4719]: I1124 10:08:29.380884 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-gqnbl_c4688244-99a9-4a75-8501-b1062f24b517/manager/0.log" Nov 24 10:08:29 crc kubenswrapper[4719]: I1124 10:08:29.517361 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-d4vvj_a951b65e-e9bd-43bc-9fa0-673642653e4c/kube-rbac-proxy/0.log" Nov 24 10:08:29 crc kubenswrapper[4719]: I1124 10:08:29.667304 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-d4vvj_a951b65e-e9bd-43bc-9fa0-673642653e4c/manager/0.log" Nov 24 10:08:29 crc kubenswrapper[4719]: I1124 10:08:29.830982 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj_33185bd6-40f2-4fb4-83b0-dd469f48598f/operator/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 10:08:30.010383 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-tlsj6_3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b/manager/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 10:08:30.016557 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-tlsj6_3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b/kube-rbac-proxy/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 10:08:30.145980 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-m828t_714fe5a8-a778-4366-8823-868dd1210515/kube-rbac-proxy/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 10:08:30.202976 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5f88c7d9f9-n97nx_37253c68-54fd-490c-9486-f2a4f2ffe834/manager/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 10:08:30.341298 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-bks8t_7cfebe98-a194-4c28-861f-a80f9f9f22de/kube-rbac-proxy/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 10:08:30.357970 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-m828t_714fe5a8-a778-4366-8823-868dd1210515/manager/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 10:08:30.419862 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-bks8t_7cfebe98-a194-4c28-861f-a80f9f9f22de/manager/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 
10:08:30.513877 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-br6f4_d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc/kube-rbac-proxy/0.log" Nov 24 10:08:30 crc kubenswrapper[4719]: I1124 10:08:30.578403 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-br6f4_d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc/manager/0.log" Nov 24 10:08:39 crc kubenswrapper[4719]: I1124 10:08:39.521469 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:08:39 crc kubenswrapper[4719]: E1124 10:08:39.522261 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:08:49 crc kubenswrapper[4719]: I1124 10:08:49.153124 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jpl9f_f42a4caa-e790-4ec2-a6fd-28d97cafcf32/control-plane-machine-set-operator/0.log" Nov 24 10:08:49 crc kubenswrapper[4719]: I1124 10:08:49.349255 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jkf8p_613468f4-6a02-4828-8873-01bccb4b2c43/machine-api-operator/0.log" Nov 24 10:08:49 crc kubenswrapper[4719]: I1124 10:08:49.376112 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jkf8p_613468f4-6a02-4828-8873-01bccb4b2c43/kube-rbac-proxy/0.log" Nov 24 10:08:52 crc kubenswrapper[4719]: I1124 10:08:52.522424 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:08:52 crc kubenswrapper[4719]: E1124 10:08:52.523654 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:09:03 crc kubenswrapper[4719]: I1124 10:09:03.030517 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-rwrqz_6810bbaf-a058-4255-a776-13435cfd7f16/cert-manager-controller/0.log" Nov 24 10:09:03 crc kubenswrapper[4719]: I1124 10:09:03.169259 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-qg4fz_2e8b2163-ffd6-4935-a172-bdae97882475/cert-manager-cainjector/0.log" Nov 24 10:09:03 crc kubenswrapper[4719]: I1124 10:09:03.248917 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-w9hp2_55b792be-fd7f-49c7-b9c9-e90acd66701a/cert-manager-webhook/0.log" Nov 24 10:09:07 crc kubenswrapper[4719]: I1124 10:09:07.520781 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:09:07 crc kubenswrapper[4719]: E1124 
10:09:07.521343 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:09:14 crc kubenswrapper[4719]: I1124 10:09:14.828482 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-4ssqk_789cda50-c0b4-40be-88a7-9af3409bc49c/nmstate-console-plugin/0.log" Nov 24 10:09:15 crc kubenswrapper[4719]: I1124 10:09:15.193791 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dd5zz_6b698c0f-63ea-4883-8771-f8b53718d191/nmstate-handler/0.log" Nov 24 10:09:15 crc kubenswrapper[4719]: I1124 10:09:15.221426 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-r5mnn_e0130b51-d625-42b0-9f57-018da660dddd/kube-rbac-proxy/0.log" Nov 24 10:09:15 crc kubenswrapper[4719]: I1124 10:09:15.278762 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-r5mnn_e0130b51-d625-42b0-9f57-018da660dddd/nmstate-metrics/0.log" Nov 24 10:09:15 crc kubenswrapper[4719]: I1124 10:09:15.540446 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-2w459_875211b7-4698-4cb8-b214-1665dd3a1a77/nmstate-operator/0.log" Nov 24 10:09:15 crc kubenswrapper[4719]: I1124 10:09:15.541345 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-bxtbn_a11d83d8-730f-4b57-bc95-e0506f69539d/nmstate-webhook/0.log" Nov 24 10:09:18 crc kubenswrapper[4719]: I1124 10:09:18.521707 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:09:18 crc kubenswrapper[4719]: E1124 10:09:18.523792 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:09:29 crc kubenswrapper[4719]: I1124 10:09:29.521670 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:09:29 crc kubenswrapper[4719]: E1124 10:09:29.522473 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:09:31 crc kubenswrapper[4719]: I1124 10:09:31.663117 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-2d8hg_89bc3754-b51b-44ed-9c94-5d7f074446e2/kube-rbac-proxy/0.log" Nov 24 10:09:31 crc kubenswrapper[4719]: I1124 10:09:31.684668 4719 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-2d8hg_89bc3754-b51b-44ed-9c94-5d7f074446e2/controller/0.log" Nov 24 10:09:31 crc kubenswrapper[4719]: I1124 10:09:31.830805 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-frr-files/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.018564 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-metrics/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.078903 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-frr-files/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.087735 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-reloader/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.087741 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-reloader/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.246104 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-reloader/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.254260 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-frr-files/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.300074 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-metrics/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.343020 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-metrics/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.549429 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-reloader/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.576233 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-metrics/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.613212 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/controller/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.625852 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-frr-files/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.837473 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/frr-metrics/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.948577 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/kube-rbac-proxy/0.log" Nov 24 10:09:32 crc kubenswrapper[4719]: I1124 10:09:32.972051 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/kube-rbac-proxy-frr/0.log" Nov 24 10:09:33 crc 
kubenswrapper[4719]: I1124 10:09:33.068736 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/reloader/0.log" Nov 24 10:09:33 crc kubenswrapper[4719]: I1124 10:09:33.239844 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-s55w7_c3fe3e56-b4b2-48c9-9b95-5aa984326faa/frr-k8s-webhook-server/0.log" Nov 24 10:09:33 crc kubenswrapper[4719]: I1124 10:09:33.518788 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c6ccddcb9-hhfps_053b9219-602e-4d52-af3d-a6e039be213e/manager/0.log" Nov 24 10:09:33 crc kubenswrapper[4719]: I1124 10:09:33.607464 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-596c48c889-kksvs_fc753907-15ea-4768-8c53-e78830249c42/webhook-server/0.log" Nov 24 10:09:33 crc kubenswrapper[4719]: I1124 10:09:33.873651 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lqkr_ce9d612a-d5e7-4ab8-809e-97155ecda8ef/kube-rbac-proxy/0.log" Nov 24 10:09:34 crc kubenswrapper[4719]: I1124 10:09:34.113799 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/frr/0.log" Nov 24 10:09:34 crc kubenswrapper[4719]: I1124 10:09:34.308461 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lqkr_ce9d612a-d5e7-4ab8-809e-97155ecda8ef/speaker/0.log" Nov 24 10:09:40 crc kubenswrapper[4719]: I1124 10:09:40.521570 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:09:40 crc kubenswrapper[4719]: E1124 10:09:40.522378 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:09:47 crc kubenswrapper[4719]: I1124 10:09:47.712438 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/util/0.log" Nov 24 10:09:47 crc kubenswrapper[4719]: I1124 10:09:47.858720 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/util/0.log" Nov 24 10:09:47 crc kubenswrapper[4719]: I1124 10:09:47.947495 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/pull/0.log" Nov 24 10:09:47 crc kubenswrapper[4719]: I1124 10:09:47.947512 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/pull/0.log" Nov 24 10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.140177 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/util/0.log" Nov 24 
10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.175173 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/pull/0.log" Nov 24 10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.195991 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/extract/0.log" Nov 24 10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.309144 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-utilities/0.log" Nov 24 10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.530229 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-content/0.log" Nov 24 10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.533946 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-content/0.log" Nov 24 10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.571523 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-utilities/0.log" Nov 24 10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.713772 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-content/0.log" Nov 24 10:09:48 crc kubenswrapper[4719]: I1124 10:09:48.744792 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-utilities/0.log" Nov 24 10:09:49 crc kubenswrapper[4719]: I1124 10:09:49.039380 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-utilities/0.log" Nov 24 10:09:49 crc kubenswrapper[4719]: I1124 10:09:49.205021 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/registry-server/0.log" Nov 24 10:09:49 crc kubenswrapper[4719]: I1124 10:09:49.252250 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-content/0.log" Nov 24 10:09:49 crc kubenswrapper[4719]: I1124 10:09:49.289668 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-utilities/0.log" Nov 24 10:09:49 crc kubenswrapper[4719]: I1124 10:09:49.320128 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-content/0.log" Nov 24 10:09:49 crc kubenswrapper[4719]: I1124 10:09:49.526919 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-content/0.log" Nov 24 10:09:49 crc kubenswrapper[4719]: I1124 10:09:49.533457 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-utilities/0.log" Nov 24 10:09:49 crc kubenswrapper[4719]: I1124 10:09:49.768600 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/util/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.110459 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/util/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.166096 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/pull/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.167168 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/pull/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.188994 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/registry-server/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.311126 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/util/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.391837 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/extract/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.410611 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/pull/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.523457 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mlglm_304abde6-d85e-4425-93f5-af2b501ab1c9/marketplace-operator/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.647129 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-utilities/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.851234 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-utilities/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.930528 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-content/0.log" Nov 24 10:09:50 crc kubenswrapper[4719]: I1124 10:09:50.995086 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-content/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.132249 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-utilities/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.253903 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-content/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.309905 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/registry-server/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.416104 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-utilities/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.604626 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-content/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.613575 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-utilities/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.630079 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-content/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.773396 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-content/0.log" Nov 24 10:09:51 crc kubenswrapper[4719]: I1124 10:09:51.781702 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-utilities/0.log" Nov 24 10:09:52 crc kubenswrapper[4719]: I1124 10:09:52.226110 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/registry-server/0.log" Nov 24 10:09:52 crc kubenswrapper[4719]: I1124 10:09:52.521885 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:09:52 crc kubenswrapper[4719]: E1124 10:09:52.522963 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:10:06 crc kubenswrapper[4719]: I1124 10:10:06.521812 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:10:06 crc kubenswrapper[4719]: E1124 10:10:06.523757 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" 
podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:10:19 crc kubenswrapper[4719]: I1124 10:10:19.521371 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:10:19 crc kubenswrapper[4719]: E1124 10:10:19.523204 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:10:30 crc kubenswrapper[4719]: I1124 10:10:30.521754 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:10:30 crc kubenswrapper[4719]: E1124 10:10:30.522559 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:10:43 crc kubenswrapper[4719]: I1124 10:10:43.520794 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:10:43 crc kubenswrapper[4719]: E1124 10:10:43.521875 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:10:54 crc kubenswrapper[4719]: I1124 10:10:54.528090 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:10:54 crc kubenswrapper[4719]: E1124 10:10:54.528906 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:11:08 crc kubenswrapper[4719]: I1124 10:11:08.521382 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:11:08 crc kubenswrapper[4719]: E1124 10:11:08.522990 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:11:20 crc kubenswrapper[4719]: I1124 10:11:20.521357 4719 scope.go:117] "RemoveContainer" 
containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:11:20 crc kubenswrapper[4719]: E1124 10:11:20.522078 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:11:34 crc kubenswrapper[4719]: I1124 10:11:34.533160 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:11:34 crc kubenswrapper[4719]: E1124 10:11:34.534391 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:11:49 crc kubenswrapper[4719]: I1124 10:11:49.521576 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:11:49 crc kubenswrapper[4719]: E1124 10:11:49.522693 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:12:04 crc kubenswrapper[4719]: I1124 10:12:04.532570 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:12:04 crc kubenswrapper[4719]: E1124 10:12:04.533273 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:12:13 crc kubenswrapper[4719]: I1124 10:12:13.476915 4719 generic.go:334] "Generic (PLEG): container finished" podID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerID="536c12f1454b919afbaa74d738ede510cd50daec76f039329621722c68ca62bc" exitCode=0 Nov 24 10:12:13 crc kubenswrapper[4719]: I1124 10:12:13.477084 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7ct94/must-gather-bvptt" event={"ID":"e334aa29-ee2e-42cf-802e-44b527bd837a","Type":"ContainerDied","Data":"536c12f1454b919afbaa74d738ede510cd50daec76f039329621722c68ca62bc"} Nov 24 10:12:13 crc kubenswrapper[4719]: I1124 10:12:13.478097 4719 scope.go:117] "RemoveContainer" containerID="536c12f1454b919afbaa74d738ede510cd50daec76f039329621722c68ca62bc" Nov 24 10:12:13 crc kubenswrapper[4719]: I1124 10:12:13.737121 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-7ct94_must-gather-bvptt_e334aa29-ee2e-42cf-802e-44b527bd837a/gather/0.log" Nov 24 10:12:13 crc kubenswrapper[4719]: I1124 10:12:13.955558 4719 scope.go:117] "RemoveContainer" containerID="409600c6505e0921c03d23d81dfff32a3c100d2b6909989280fa4524868c59d0" Nov 24 10:12:19 crc kubenswrapper[4719]: I1124 10:12:19.521801 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:12:19 crc kubenswrapper[4719]: E1124 10:12:19.522842 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:12:21 crc kubenswrapper[4719]: I1124 10:12:21.835279 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7ct94/must-gather-bvptt"] Nov 24 10:12:21 crc kubenswrapper[4719]: I1124 10:12:21.835981 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7ct94/must-gather-bvptt" podUID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerName="copy" containerID="cri-o://7775123b7c97752de9901afc5aba1ff5386bbcbf0b6affcd9b5f6605187257a9" gracePeriod=2 Nov 24 10:12:21 crc kubenswrapper[4719]: I1124 10:12:21.843836 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7ct94/must-gather-bvptt"] Nov 24 10:12:22 crc kubenswrapper[4719]: I1124 10:12:22.580560 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7ct94_must-gather-bvptt_e334aa29-ee2e-42cf-802e-44b527bd837a/copy/0.log" Nov 24 10:12:22 crc kubenswrapper[4719]: I1124 10:12:22.581289 4719 generic.go:334] "Generic (PLEG): container finished" podID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerID="7775123b7c97752de9901afc5aba1ff5386bbcbf0b6affcd9b5f6605187257a9" exitCode=143 Nov 24 10:12:22 crc kubenswrapper[4719]: I1124 10:12:22.708470 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7ct94_must-gather-bvptt_e334aa29-ee2e-42cf-802e-44b527bd837a/copy/0.log" Nov 24 10:12:22 crc kubenswrapper[4719]: I1124 10:12:22.708807 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7ct94/must-gather-bvptt" Nov 24 10:12:22 crc kubenswrapper[4719]: I1124 10:12:22.831301 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e334aa29-ee2e-42cf-802e-44b527bd837a-must-gather-output\") pod \"e334aa29-ee2e-42cf-802e-44b527bd837a\" (UID: \"e334aa29-ee2e-42cf-802e-44b527bd837a\") " Nov 24 10:12:22 crc kubenswrapper[4719]: I1124 10:12:22.831445 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qchg\" (UniqueName: \"kubernetes.io/projected/e334aa29-ee2e-42cf-802e-44b527bd837a-kube-api-access-6qchg\") pod \"e334aa29-ee2e-42cf-802e-44b527bd837a\" (UID: \"e334aa29-ee2e-42cf-802e-44b527bd837a\") " Nov 24 10:12:22 crc kubenswrapper[4719]: I1124 10:12:22.846689 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e334aa29-ee2e-42cf-802e-44b527bd837a-kube-api-access-6qchg" (OuterVolumeSpecName: "kube-api-access-6qchg") pod "e334aa29-ee2e-42cf-802e-44b527bd837a" (UID: "e334aa29-ee2e-42cf-802e-44b527bd837a"). InnerVolumeSpecName "kube-api-access-6qchg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:12:22 crc kubenswrapper[4719]: I1124 10:12:22.935501 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qchg\" (UniqueName: \"kubernetes.io/projected/e334aa29-ee2e-42cf-802e-44b527bd837a-kube-api-access-6qchg\") on node \"crc\" DevicePath \"\"" Nov 24 10:12:23 crc kubenswrapper[4719]: I1124 10:12:23.043339 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e334aa29-ee2e-42cf-802e-44b527bd837a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e334aa29-ee2e-42cf-802e-44b527bd837a" (UID: "e334aa29-ee2e-42cf-802e-44b527bd837a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:12:23 crc kubenswrapper[4719]: I1124 10:12:23.139856 4719 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e334aa29-ee2e-42cf-802e-44b527bd837a-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 10:12:23 crc kubenswrapper[4719]: I1124 10:12:23.598257 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7ct94_must-gather-bvptt_e334aa29-ee2e-42cf-802e-44b527bd837a/copy/0.log" Nov 24 10:12:23 crc kubenswrapper[4719]: I1124 10:12:23.599282 4719 scope.go:117] "RemoveContainer" containerID="7775123b7c97752de9901afc5aba1ff5386bbcbf0b6affcd9b5f6605187257a9" Nov 24 10:12:23 crc kubenswrapper[4719]: I1124 10:12:23.599447 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7ct94/must-gather-bvptt" Nov 24 10:12:23 crc kubenswrapper[4719]: I1124 10:12:23.644164 4719 scope.go:117] "RemoveContainer" containerID="536c12f1454b919afbaa74d738ede510cd50daec76f039329621722c68ca62bc" Nov 24 10:12:24 crc kubenswrapper[4719]: I1124 10:12:24.531775 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e334aa29-ee2e-42cf-802e-44b527bd837a" path="/var/lib/kubelet/pods/e334aa29-ee2e-42cf-802e-44b527bd837a/volumes" Nov 24 10:12:33 crc kubenswrapper[4719]: I1124 10:12:33.521851 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:12:33 crc kubenswrapper[4719]: E1124 10:12:33.522767 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.718729 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ztc5t"] Nov 24 10:12:45 crc kubenswrapper[4719]: E1124 10:12:45.719710 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27386b4c-fd74-4b04-89af-c8e23cfa6c9a" containerName="container-00" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.719725 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="27386b4c-fd74-4b04-89af-c8e23cfa6c9a" containerName="container-00" Nov 24 10:12:45 crc kubenswrapper[4719]: E1124 10:12:45.719748 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerName="copy" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.719756 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerName="copy" Nov 24 10:12:45 crc kubenswrapper[4719]: E1124 10:12:45.719770 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerName="gather" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.719778 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerName="gather" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.719986 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerName="gather" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.720008 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="e334aa29-ee2e-42cf-802e-44b527bd837a" containerName="copy" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.720021 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="27386b4c-fd74-4b04-89af-c8e23cfa6c9a" containerName="container-00" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.721804 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.743795 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztc5t"] Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.777086 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7mw4\" (UniqueName: \"kubernetes.io/projected/28354a2c-0a0f-4030-964b-b206606ba426-kube-api-access-c7mw4\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.777164 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-utilities\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.777244 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-catalog-content\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.878443 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-utilities\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.878539 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-catalog-content\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.878597 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7mw4\" (UniqueName: \"kubernetes.io/projected/28354a2c-0a0f-4030-964b-b206606ba426-kube-api-access-c7mw4\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.879406 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-utilities\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.879487 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-catalog-content\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:45 crc kubenswrapper[4719]: I1124 10:12:45.901924 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c7mw4\" (UniqueName: \"kubernetes.io/projected/28354a2c-0a0f-4030-964b-b206606ba426-kube-api-access-c7mw4\") pod \"certified-operators-ztc5t\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:46 crc kubenswrapper[4719]: I1124 10:12:46.054929 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:46 crc kubenswrapper[4719]: I1124 10:12:46.691702 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztc5t"] Nov 24 10:12:46 crc kubenswrapper[4719]: W1124 10:12:46.696314 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28354a2c_0a0f_4030_964b_b206606ba426.slice/crio-f1bef1e5927fff39edda67d023f2bfe7fc51609f38e49bd4747dde442b97a810 WatchSource:0}: Error finding container f1bef1e5927fff39edda67d023f2bfe7fc51609f38e49bd4747dde442b97a810: Status 404 returned error can't find the container with id f1bef1e5927fff39edda67d023f2bfe7fc51609f38e49bd4747dde442b97a810 Nov 24 10:12:46 crc kubenswrapper[4719]: I1124 10:12:46.801402 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztc5t" event={"ID":"28354a2c-0a0f-4030-964b-b206606ba426","Type":"ContainerStarted","Data":"f1bef1e5927fff39edda67d023f2bfe7fc51609f38e49bd4747dde442b97a810"} Nov 24 10:12:47 crc kubenswrapper[4719]: I1124 10:12:47.813893 4719 generic.go:334] "Generic (PLEG): container finished" podID="28354a2c-0a0f-4030-964b-b206606ba426" containerID="3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076" exitCode=0 Nov 24 10:12:47 crc kubenswrapper[4719]: I1124 10:12:47.815443 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztc5t" event={"ID":"28354a2c-0a0f-4030-964b-b206606ba426","Type":"ContainerDied","Data":"3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076"} Nov 24 10:12:47 crc kubenswrapper[4719]: I1124 10:12:47.818105 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 10:12:48 crc kubenswrapper[4719]: I1124 10:12:48.542524 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:12:48 crc kubenswrapper[4719]: I1124 10:12:48.825783 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"7c367fa3d99ee232632dd218f86db975241bce842b32aed9d95c60ebe991c37c"} Nov 24 10:12:48 crc kubenswrapper[4719]: I1124 10:12:48.831577 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztc5t" event={"ID":"28354a2c-0a0f-4030-964b-b206606ba426","Type":"ContainerStarted","Data":"17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530"} Nov 24 10:12:50 crc kubenswrapper[4719]: I1124 10:12:50.862256 4719 generic.go:334] "Generic (PLEG): container finished" podID="28354a2c-0a0f-4030-964b-b206606ba426" containerID="17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530" exitCode=0 Nov 24 10:12:50 crc kubenswrapper[4719]: I1124 10:12:50.862326 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztc5t" 
event={"ID":"28354a2c-0a0f-4030-964b-b206606ba426","Type":"ContainerDied","Data":"17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530"} Nov 24 10:12:51 crc kubenswrapper[4719]: I1124 10:12:51.873146 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztc5t" event={"ID":"28354a2c-0a0f-4030-964b-b206606ba426","Type":"ContainerStarted","Data":"9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7"} Nov 24 10:12:51 crc kubenswrapper[4719]: I1124 10:12:51.900235 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ztc5t" podStartSLOduration=3.380105622 podStartE2EDuration="6.900214957s" podCreationTimestamp="2025-11-24 10:12:45 +0000 UTC" firstStartedPulling="2025-11-24 10:12:47.817860575 +0000 UTC m=+4744.149133827" lastFinishedPulling="2025-11-24 10:12:51.3379699 +0000 UTC m=+4747.669243162" observedRunningTime="2025-11-24 10:12:51.897367176 +0000 UTC m=+4748.228640438" watchObservedRunningTime="2025-11-24 10:12:51.900214957 +0000 UTC m=+4748.231488209" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.104556 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-275k5"] Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.106989 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.120658 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-275k5"] Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.210537 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkvk\" (UniqueName: \"kubernetes.io/projected/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-kube-api-access-7vkvk\") pod \"redhat-operators-275k5\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.210618 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-catalog-content\") pod \"redhat-operators-275k5\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.210962 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-utilities\") pod \"redhat-operators-275k5\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.312644 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-utilities\") pod \"redhat-operators-275k5\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.312723 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vkvk\" (UniqueName: \"kubernetes.io/projected/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-kube-api-access-7vkvk\") pod \"redhat-operators-275k5\" (UID: 
\"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.312767 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-catalog-content\") pod \"redhat-operators-275k5\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.313090 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-utilities\") pod \"redhat-operators-275k5\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.313231 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-catalog-content\") pod \"redhat-operators-275k5\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.338376 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vkvk\" (UniqueName: \"kubernetes.io/projected/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-kube-api-access-7vkvk\") pod \"redhat-operators-275k5\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.456588 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:12:52 crc kubenswrapper[4719]: I1124 10:12:52.998866 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-275k5"] Nov 24 10:12:53 crc kubenswrapper[4719]: I1124 10:12:53.893302 4719 generic.go:334] "Generic (PLEG): container finished" podID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerID="bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12" exitCode=0 Nov 24 10:12:53 crc kubenswrapper[4719]: I1124 10:12:53.893389 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-275k5" event={"ID":"ee4d028b-8b69-4ca8-842e-339bbd4f44fb","Type":"ContainerDied","Data":"bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12"} Nov 24 10:12:53 crc kubenswrapper[4719]: I1124 10:12:53.893634 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-275k5" event={"ID":"ee4d028b-8b69-4ca8-842e-339bbd4f44fb","Type":"ContainerStarted","Data":"72b179bf7779f0192ba1f3bac41fa2b0a9354c8ac1aca908e0ec3333c6c5e5c1"} Nov 24 10:12:54 crc kubenswrapper[4719]: I1124 10:12:54.904213 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-275k5" event={"ID":"ee4d028b-8b69-4ca8-842e-339bbd4f44fb","Type":"ContainerStarted","Data":"f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a"} Nov 24 10:12:56 crc kubenswrapper[4719]: I1124 10:12:56.055298 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:56 crc kubenswrapper[4719]: I1124 10:12:56.055612 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:56 crc kubenswrapper[4719]: I1124 10:12:56.110694 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:57 crc kubenswrapper[4719]: I1124 10:12:57.015139 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:57 crc kubenswrapper[4719]: I1124 10:12:57.299624 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztc5t"] Nov 24 10:12:58 crc kubenswrapper[4719]: I1124 10:12:58.954460 4719 generic.go:334] "Generic (PLEG): container finished" podID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerID="f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a" exitCode=0 Nov 24 10:12:58 crc kubenswrapper[4719]: I1124 10:12:58.954962 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ztc5t" podUID="28354a2c-0a0f-4030-964b-b206606ba426" containerName="registry-server" containerID="cri-o://9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7" gracePeriod=2 Nov 24 10:12:58 crc kubenswrapper[4719]: I1124 10:12:58.955337 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-275k5" event={"ID":"ee4d028b-8b69-4ca8-842e-339bbd4f44fb","Type":"ContainerDied","Data":"f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a"} Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.437863 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.579929 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7mw4\" (UniqueName: \"kubernetes.io/projected/28354a2c-0a0f-4030-964b-b206606ba426-kube-api-access-c7mw4\") pod \"28354a2c-0a0f-4030-964b-b206606ba426\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.580103 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-utilities\") pod \"28354a2c-0a0f-4030-964b-b206606ba426\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.580193 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-catalog-content\") pod \"28354a2c-0a0f-4030-964b-b206606ba426\" (UID: \"28354a2c-0a0f-4030-964b-b206606ba426\") " Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.580682 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-utilities" (OuterVolumeSpecName: "utilities") pod "28354a2c-0a0f-4030-964b-b206606ba426" (UID: "28354a2c-0a0f-4030-964b-b206606ba426"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.619919 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28354a2c-0a0f-4030-964b-b206606ba426" (UID: "28354a2c-0a0f-4030-964b-b206606ba426"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.625355 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28354a2c-0a0f-4030-964b-b206606ba426-kube-api-access-c7mw4" (OuterVolumeSpecName: "kube-api-access-c7mw4") pod "28354a2c-0a0f-4030-964b-b206606ba426" (UID: "28354a2c-0a0f-4030-964b-b206606ba426"). InnerVolumeSpecName "kube-api-access-c7mw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.682386 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7mw4\" (UniqueName: \"kubernetes.io/projected/28354a2c-0a0f-4030-964b-b206606ba426-kube-api-access-c7mw4\") on node \"crc\" DevicePath \"\"" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.682431 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.682443 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28354a2c-0a0f-4030-964b-b206606ba426-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.966281 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-275k5" event={"ID":"ee4d028b-8b69-4ca8-842e-339bbd4f44fb","Type":"ContainerStarted","Data":"84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de"} Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.979404 4719 generic.go:334] "Generic (PLEG): container finished" podID="28354a2c-0a0f-4030-964b-b206606ba426" containerID="9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7" exitCode=0 Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.979450 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztc5t" event={"ID":"28354a2c-0a0f-4030-964b-b206606ba426","Type":"ContainerDied","Data":"9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7"} Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.979482 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztc5t" event={"ID":"28354a2c-0a0f-4030-964b-b206606ba426","Type":"ContainerDied","Data":"f1bef1e5927fff39edda67d023f2bfe7fc51609f38e49bd4747dde442b97a810"} Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.979501 4719 scope.go:117] "RemoveContainer" containerID="9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.979649 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztc5t" Nov 24 10:12:59 crc kubenswrapper[4719]: I1124 10:12:59.997692 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-275k5" podStartSLOduration=2.130985435 podStartE2EDuration="7.997673269s" podCreationTimestamp="2025-11-24 10:12:52 +0000 UTC" firstStartedPulling="2025-11-24 10:12:53.894979773 +0000 UTC m=+4750.226253025" lastFinishedPulling="2025-11-24 10:12:59.761667607 +0000 UTC m=+4756.092940859" observedRunningTime="2025-11-24 10:12:59.987366285 +0000 UTC m=+4756.318639557" watchObservedRunningTime="2025-11-24 10:12:59.997673269 +0000 UTC m=+4756.328946521" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.001864 4719 scope.go:117] "RemoveContainer" containerID="17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.023551 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztc5t"] Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.028639 4719 scope.go:117] "RemoveContainer" containerID="3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.035325 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ztc5t"] Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.048301 4719 scope.go:117] "RemoveContainer" containerID="9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7" Nov 24 10:13:00 crc kubenswrapper[4719]: E1124 10:13:00.048731 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7\": container with ID starting with 9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7 not found: ID does not exist" containerID="9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.048764 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7"} err="failed to get container status \"9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7\": rpc error: code = NotFound desc = could not find container \"9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7\": container with ID starting with 9053d25fac99b47d738badac6617d2b0d9c70fb7fcf75a2d7e6810180893eff7 not found: ID does not exist" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.048790 4719 scope.go:117] "RemoveContainer" containerID="17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530" Nov 24 10:13:00 crc kubenswrapper[4719]: E1124 10:13:00.049071 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530\": container with ID starting with 17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530 not found: ID does not exist" containerID="17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.049096 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530"} err="failed to get 
container status \"17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530\": rpc error: code = NotFound desc = could not find container \"17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530\": container with ID starting with 17a7064fb14b7f606006064dbff594d772de9b4af887b51b21f855de928fc530 not found: ID does not exist" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.049113 4719 scope.go:117] "RemoveContainer" containerID="3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076" Nov 24 10:13:00 crc kubenswrapper[4719]: E1124 10:13:00.049328 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076\": container with ID starting with 3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076 not found: ID does not exist" containerID="3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.049354 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076"} err="failed to get container status \"3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076\": rpc error: code = NotFound desc = could not find container \"3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076\": container with ID starting with 3c5b9d2078fd6dca4eecb626579fcf3f736f6a49216a3d513207ce7ba2ec5076 not found: ID does not exist" Nov 24 10:13:00 crc kubenswrapper[4719]: I1124 10:13:00.532273 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28354a2c-0a0f-4030-964b-b206606ba426" path="/var/lib/kubelet/pods/28354a2c-0a0f-4030-964b-b206606ba426/volumes" Nov 24 10:13:02 crc kubenswrapper[4719]: I1124 10:13:02.456805 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:13:02 crc kubenswrapper[4719]: I1124 10:13:02.457190 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:13:03 crc kubenswrapper[4719]: I1124 10:13:03.519464 4719 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-275k5" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="registry-server" probeResult="failure" output=< Nov 24 10:13:03 crc kubenswrapper[4719]: timeout: failed to connect service ":50051" within 1s Nov 24 10:13:03 crc kubenswrapper[4719]: > Nov 24 10:13:12 crc kubenswrapper[4719]: I1124 10:13:12.502677 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:13:12 crc kubenswrapper[4719]: I1124 10:13:12.556362 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:13:12 crc kubenswrapper[4719]: I1124 10:13:12.752988 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-275k5"] Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.108914 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-275k5" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="registry-server" containerID="cri-o://84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de" gracePeriod=2 Nov 24 10:13:14 crc 
kubenswrapper[4719]: I1124 10:13:14.581160 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.763424 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-utilities\") pod \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.763478 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vkvk\" (UniqueName: \"kubernetes.io/projected/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-kube-api-access-7vkvk\") pod \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.764435 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-utilities" (OuterVolumeSpecName: "utilities") pod "ee4d028b-8b69-4ca8-842e-339bbd4f44fb" (UID: "ee4d028b-8b69-4ca8-842e-339bbd4f44fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.765780 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-catalog-content\") pod \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\" (UID: \"ee4d028b-8b69-4ca8-842e-339bbd4f44fb\") " Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.766613 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.770815 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-kube-api-access-7vkvk" (OuterVolumeSpecName: "kube-api-access-7vkvk") pod "ee4d028b-8b69-4ca8-842e-339bbd4f44fb" (UID: "ee4d028b-8b69-4ca8-842e-339bbd4f44fb"). InnerVolumeSpecName "kube-api-access-7vkvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.869177 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vkvk\" (UniqueName: \"kubernetes.io/projected/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-kube-api-access-7vkvk\") on node \"crc\" DevicePath \"\"" Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.895131 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee4d028b-8b69-4ca8-842e-339bbd4f44fb" (UID: "ee4d028b-8b69-4ca8-842e-339bbd4f44fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:13:14 crc kubenswrapper[4719]: I1124 10:13:14.971784 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee4d028b-8b69-4ca8-842e-339bbd4f44fb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.120914 4719 generic.go:334] "Generic (PLEG): container finished" podID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerID="84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de" exitCode=0 Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.120980 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-275k5" event={"ID":"ee4d028b-8b69-4ca8-842e-339bbd4f44fb","Type":"ContainerDied","Data":"84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de"} Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.121025 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-275k5" event={"ID":"ee4d028b-8b69-4ca8-842e-339bbd4f44fb","Type":"ContainerDied","Data":"72b179bf7779f0192ba1f3bac41fa2b0a9354c8ac1aca908e0ec3333c6c5e5c1"} Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.121063 4719 scope.go:117] "RemoveContainer" containerID="84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.121292 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-275k5" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.157352 4719 scope.go:117] "RemoveContainer" containerID="f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.174124 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-275k5"] Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.188556 4719 scope.go:117] "RemoveContainer" containerID="bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.189146 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-275k5"] Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.230729 4719 scope.go:117] "RemoveContainer" containerID="84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de" Nov 24 10:13:15 crc kubenswrapper[4719]: E1124 10:13:15.232009 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de\": container with ID starting with 84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de not found: ID does not exist" containerID="84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.232059 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de"} err="failed to get container status \"84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de\": rpc error: code = NotFound desc = could not find container \"84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de\": container with ID starting with 84f8948b1c771a302aab9b6b64bf84d9e5ea3d12f0b1632e3f092cb10f6cb3de not found: ID does not exist" Nov 24 10:13:15 crc 
kubenswrapper[4719]: I1124 10:13:15.232090 4719 scope.go:117] "RemoveContainer" containerID="f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a" Nov 24 10:13:15 crc kubenswrapper[4719]: E1124 10:13:15.232402 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a\": container with ID starting with f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a not found: ID does not exist" containerID="f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.232443 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a"} err="failed to get container status \"f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a\": rpc error: code = NotFound desc = could not find container \"f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a\": container with ID starting with f4f0832969d3883dc1f478b4b2818aabe780cb3419489c2c634a5872f2873d4a not found: ID does not exist" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.232474 4719 scope.go:117] "RemoveContainer" containerID="bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12" Nov 24 10:13:15 crc kubenswrapper[4719]: E1124 10:13:15.232751 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12\": container with ID starting with bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12 not found: ID does not exist" containerID="bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12" Nov 24 10:13:15 crc kubenswrapper[4719]: I1124 10:13:15.232782 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12"} err="failed to get container status \"bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12\": rpc error: code = NotFound desc = could not find container \"bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12\": container with ID starting with bcda0a0fcb6114fc35b3d9a21aa5ba3dcd016b7415a6ba8d239c1676a0b35a12 not found: ID does not exist" Nov 24 10:13:16 crc kubenswrapper[4719]: I1124 10:13:16.530103 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" path="/var/lib/kubelet/pods/ee4d028b-8b69-4ca8-842e-339bbd4f44fb/volumes" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.306583 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9rl7n"] Nov 24 10:13:52 crc kubenswrapper[4719]: E1124 10:13:52.307545 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28354a2c-0a0f-4030-964b-b206606ba426" containerName="extract-content" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.307560 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="28354a2c-0a0f-4030-964b-b206606ba426" containerName="extract-content" Nov 24 10:13:52 crc kubenswrapper[4719]: E1124 10:13:52.307570 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="extract-content" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.307576 4719 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="extract-content" Nov 24 10:13:52 crc kubenswrapper[4719]: E1124 10:13:52.307587 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28354a2c-0a0f-4030-964b-b206606ba426" containerName="registry-server" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.307593 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="28354a2c-0a0f-4030-964b-b206606ba426" containerName="registry-server" Nov 24 10:13:52 crc kubenswrapper[4719]: E1124 10:13:52.307600 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="extract-utilities" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.307606 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="extract-utilities" Nov 24 10:13:52 crc kubenswrapper[4719]: E1124 10:13:52.307629 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="registry-server" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.307635 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="registry-server" Nov 24 10:13:52 crc kubenswrapper[4719]: E1124 10:13:52.307650 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28354a2c-0a0f-4030-964b-b206606ba426" containerName="extract-utilities" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.307657 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="28354a2c-0a0f-4030-964b-b206606ba426" containerName="extract-utilities" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.307828 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4d028b-8b69-4ca8-842e-339bbd4f44fb" containerName="registry-server" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.307848 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="28354a2c-0a0f-4030-964b-b206606ba426" containerName="registry-server" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.309836 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.341308 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rl7n"] Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.363350 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-catalog-content\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.363481 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7m4\" (UniqueName: \"kubernetes.io/projected/14282860-925a-4bec-b2a3-86b35f1b04b8-kube-api-access-7c7m4\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.363536 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-utilities\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.465264 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-catalog-content\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.465362 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c7m4\" (UniqueName: \"kubernetes.io/projected/14282860-925a-4bec-b2a3-86b35f1b04b8-kube-api-access-7c7m4\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.465410 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-utilities\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.465850 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-catalog-content\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.465930 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-utilities\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.489080 4719 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7c7m4\" (UniqueName: \"kubernetes.io/projected/14282860-925a-4bec-b2a3-86b35f1b04b8-kube-api-access-7c7m4\") pod \"community-operators-9rl7n\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:52 crc kubenswrapper[4719]: I1124 10:13:52.637223 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:13:53 crc kubenswrapper[4719]: I1124 10:13:53.269283 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rl7n"] Nov 24 10:13:53 crc kubenswrapper[4719]: I1124 10:13:53.529203 4719 generic.go:334] "Generic (PLEG): container finished" podID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerID="d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586" exitCode=0 Nov 24 10:13:53 crc kubenswrapper[4719]: I1124 10:13:53.529429 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rl7n" event={"ID":"14282860-925a-4bec-b2a3-86b35f1b04b8","Type":"ContainerDied","Data":"d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586"} Nov 24 10:13:53 crc kubenswrapper[4719]: I1124 10:13:53.529499 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rl7n" event={"ID":"14282860-925a-4bec-b2a3-86b35f1b04b8","Type":"ContainerStarted","Data":"fa1b3aeb2f6072022d7c4269f5dfb0cb1e2c0e177a44b42bd63d0561b9e22f1f"} Nov 24 10:13:55 crc kubenswrapper[4719]: I1124 10:13:55.560621 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rl7n" event={"ID":"14282860-925a-4bec-b2a3-86b35f1b04b8","Type":"ContainerStarted","Data":"4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d"} Nov 24 10:13:56 crc kubenswrapper[4719]: I1124 10:13:56.571459 4719 generic.go:334] "Generic (PLEG): container finished" podID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerID="4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d" exitCode=0 Nov 24 10:13:56 crc kubenswrapper[4719]: I1124 10:13:56.571514 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rl7n" event={"ID":"14282860-925a-4bec-b2a3-86b35f1b04b8","Type":"ContainerDied","Data":"4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d"} Nov 24 10:13:57 crc kubenswrapper[4719]: I1124 10:13:57.581874 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rl7n" event={"ID":"14282860-925a-4bec-b2a3-86b35f1b04b8","Type":"ContainerStarted","Data":"2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8"} Nov 24 10:13:57 crc kubenswrapper[4719]: I1124 10:13:57.604227 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9rl7n" podStartSLOduration=2.119381069 podStartE2EDuration="5.604210018s" podCreationTimestamp="2025-11-24 10:13:52 +0000 UTC" firstStartedPulling="2025-11-24 10:13:53.531499161 +0000 UTC m=+4809.862772453" lastFinishedPulling="2025-11-24 10:13:57.01632815 +0000 UTC m=+4813.347601402" observedRunningTime="2025-11-24 10:13:57.597779445 +0000 UTC m=+4813.929052707" watchObservedRunningTime="2025-11-24 10:13:57.604210018 +0000 UTC m=+4813.935483280" Nov 24 10:14:02 crc kubenswrapper[4719]: I1124 10:14:02.638467 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:14:02 crc kubenswrapper[4719]: I1124 10:14:02.638952 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:14:02 crc kubenswrapper[4719]: I1124 10:14:02.696248 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:14:03 crc kubenswrapper[4719]: I1124 10:14:03.692262 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:14:03 crc kubenswrapper[4719]: I1124 10:14:03.750593 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rl7n"] Nov 24 10:14:05 crc kubenswrapper[4719]: I1124 10:14:05.649888 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9rl7n" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerName="registry-server" containerID="cri-o://2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8" gracePeriod=2 Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.102444 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.234726 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-catalog-content\") pod \"14282860-925a-4bec-b2a3-86b35f1b04b8\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.234817 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c7m4\" (UniqueName: \"kubernetes.io/projected/14282860-925a-4bec-b2a3-86b35f1b04b8-kube-api-access-7c7m4\") pod \"14282860-925a-4bec-b2a3-86b35f1b04b8\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.234910 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-utilities\") pod \"14282860-925a-4bec-b2a3-86b35f1b04b8\" (UID: \"14282860-925a-4bec-b2a3-86b35f1b04b8\") " Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.236117 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-utilities" (OuterVolumeSpecName: "utilities") pod "14282860-925a-4bec-b2a3-86b35f1b04b8" (UID: "14282860-925a-4bec-b2a3-86b35f1b04b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.248577 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14282860-925a-4bec-b2a3-86b35f1b04b8-kube-api-access-7c7m4" (OuterVolumeSpecName: "kube-api-access-7c7m4") pod "14282860-925a-4bec-b2a3-86b35f1b04b8" (UID: "14282860-925a-4bec-b2a3-86b35f1b04b8"). InnerVolumeSpecName "kube-api-access-7c7m4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.301268 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14282860-925a-4bec-b2a3-86b35f1b04b8" (UID: "14282860-925a-4bec-b2a3-86b35f1b04b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.338940 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.338974 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14282860-925a-4bec-b2a3-86b35f1b04b8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.339009 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c7m4\" (UniqueName: \"kubernetes.io/projected/14282860-925a-4bec-b2a3-86b35f1b04b8-kube-api-access-7c7m4\") on node \"crc\" DevicePath \"\"" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.661113 4719 generic.go:334] "Generic (PLEG): container finished" podID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerID="2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8" exitCode=0 Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.661152 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rl7n" event={"ID":"14282860-925a-4bec-b2a3-86b35f1b04b8","Type":"ContainerDied","Data":"2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8"} Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.661177 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rl7n" event={"ID":"14282860-925a-4bec-b2a3-86b35f1b04b8","Type":"ContainerDied","Data":"fa1b3aeb2f6072022d7c4269f5dfb0cb1e2c0e177a44b42bd63d0561b9e22f1f"} Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.661193 4719 scope.go:117] "RemoveContainer" containerID="2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.661296 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9rl7n" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.687897 4719 scope.go:117] "RemoveContainer" containerID="4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.689158 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rl7n"] Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.700580 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9rl7n"] Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.820072 4719 scope.go:117] "RemoveContainer" containerID="d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.858902 4719 scope.go:117] "RemoveContainer" containerID="2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8" Nov 24 10:14:06 crc kubenswrapper[4719]: E1124 10:14:06.859809 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8\": container with ID starting with 2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8 not found: ID does not exist" containerID="2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.859878 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8"} err="failed to get container status \"2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8\": rpc error: code = NotFound desc = could not find container \"2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8\": container with ID starting with 2e87478abaa1006f911e04c33593e299a3a34b57a1ab3573ee05452e62bea5e8 not found: ID does not exist" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.860180 4719 scope.go:117] "RemoveContainer" containerID="4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d" Nov 24 10:14:06 crc kubenswrapper[4719]: E1124 10:14:06.860644 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d\": container with ID starting with 4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d not found: ID does not exist" containerID="4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.860674 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d"} err="failed to get container status \"4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d\": rpc error: code = NotFound desc = could not find container \"4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d\": container with ID starting with 4790a34e5ccc4909580688b3798bf521e24a8e9e9ee86da73de3076d24a5e97d not found: ID does not exist" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.860721 4719 scope.go:117] "RemoveContainer" containerID="d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586" Nov 24 10:14:06 crc kubenswrapper[4719]: E1124 10:14:06.861681 4719 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586\": container with ID starting with d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586 not found: ID does not exist" containerID="d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586" Nov 24 10:14:06 crc kubenswrapper[4719]: I1124 10:14:06.861710 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586"} err="failed to get container status \"d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586\": rpc error: code = NotFound desc = could not find container \"d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586\": container with ID starting with d7808e095b06f289d67ed190c24400ab820ba64605716321487ca6c05de3e586 not found: ID does not exist" Nov 24 10:14:08 crc kubenswrapper[4719]: I1124 10:14:08.535721 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" path="/var/lib/kubelet/pods/14282860-925a-4bec-b2a3-86b35f1b04b8/volumes" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.157726 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj"] Nov 24 10:15:00 crc kubenswrapper[4719]: E1124 10:15:00.158714 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerName="extract-utilities" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.158729 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerName="extract-utilities" Nov 24 10:15:00 crc kubenswrapper[4719]: E1124 10:15:00.158768 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerName="registry-server" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.158777 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerName="registry-server" Nov 24 10:15:00 crc kubenswrapper[4719]: E1124 10:15:00.158796 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerName="extract-content" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.158803 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerName="extract-content" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.159008 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="14282860-925a-4bec-b2a3-86b35f1b04b8" containerName="registry-server" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.159749 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.161592 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.162163 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.178479 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj"] Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.306678 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-secret-volume\") pod \"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.306750 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r598\" (UniqueName: \"kubernetes.io/projected/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-kube-api-access-5r598\") pod \"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.306811 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-config-volume\") pod \"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.408683 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-secret-volume\") pod \"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.408995 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r598\" (UniqueName: \"kubernetes.io/projected/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-kube-api-access-5r598\") pod \"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.409075 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-config-volume\") pod \"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.410603 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-config-volume\") pod 
\"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.415754 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-secret-volume\") pod \"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.426540 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r598\" (UniqueName: \"kubernetes.io/projected/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-kube-api-access-5r598\") pod \"collect-profiles-29399655-l2ngj\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.498425 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:00 crc kubenswrapper[4719]: I1124 10:15:00.956348 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj"] Nov 24 10:15:01 crc kubenswrapper[4719]: I1124 10:15:01.206590 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" event={"ID":"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0","Type":"ContainerStarted","Data":"9cf99bc239717e8933a0a6fe4ebce862e8bfa76c101b8214f22b75ee7ed3da95"} Nov 24 10:15:01 crc kubenswrapper[4719]: I1124 10:15:01.208111 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" event={"ID":"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0","Type":"ContainerStarted","Data":"96f8b3b9d770d9ce22f47d548a63e86cb5e232c8e1949275dfeacb11940aa11d"} Nov 24 10:15:01 crc kubenswrapper[4719]: I1124 10:15:01.229483 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" podStartSLOduration=1.229462684 podStartE2EDuration="1.229462684s" podCreationTimestamp="2025-11-24 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 10:15:01.223458033 +0000 UTC m=+4877.554731325" watchObservedRunningTime="2025-11-24 10:15:01.229462684 +0000 UTC m=+4877.560735946" Nov 24 10:15:02 crc kubenswrapper[4719]: I1124 10:15:02.215720 4719 generic.go:334] "Generic (PLEG): container finished" podID="5d1cd13e-8b25-453a-93e1-0bd4ed2098f0" containerID="9cf99bc239717e8933a0a6fe4ebce862e8bfa76c101b8214f22b75ee7ed3da95" exitCode=0 Nov 24 10:15:02 crc kubenswrapper[4719]: I1124 10:15:02.215762 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" event={"ID":"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0","Type":"ContainerDied","Data":"9cf99bc239717e8933a0a6fe4ebce862e8bfa76c101b8214f22b75ee7ed3da95"} Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.793709 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.839643 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-secret-volume\") pod \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.839791 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-config-volume\") pod \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.839878 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r598\" (UniqueName: \"kubernetes.io/projected/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-kube-api-access-5r598\") pod \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\" (UID: \"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0\") " Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.840685 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-config-volume" (OuterVolumeSpecName: "config-volume") pod "5d1cd13e-8b25-453a-93e1-0bd4ed2098f0" (UID: "5d1cd13e-8b25-453a-93e1-0bd4ed2098f0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.853418 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-kube-api-access-5r598" (OuterVolumeSpecName: "kube-api-access-5r598") pod "5d1cd13e-8b25-453a-93e1-0bd4ed2098f0" (UID: "5d1cd13e-8b25-453a-93e1-0bd4ed2098f0"). InnerVolumeSpecName "kube-api-access-5r598". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.854240 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5d1cd13e-8b25-453a-93e1-0bd4ed2098f0" (UID: "5d1cd13e-8b25-453a-93e1-0bd4ed2098f0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.942569 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r598\" (UniqueName: \"kubernetes.io/projected/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-kube-api-access-5r598\") on node \"crc\" DevicePath \"\"" Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.942606 4719 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 10:15:03 crc kubenswrapper[4719]: I1124 10:15:03.942617 4719 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d1cd13e-8b25-453a-93e1-0bd4ed2098f0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 10:15:04 crc kubenswrapper[4719]: I1124 10:15:04.432796 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" event={"ID":"5d1cd13e-8b25-453a-93e1-0bd4ed2098f0","Type":"ContainerDied","Data":"96f8b3b9d770d9ce22f47d548a63e86cb5e232c8e1949275dfeacb11940aa11d"} Nov 24 10:15:04 crc kubenswrapper[4719]: I1124 10:15:04.433184 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f8b3b9d770d9ce22f47d548a63e86cb5e232c8e1949275dfeacb11940aa11d" Nov 24 10:15:04 crc kubenswrapper[4719]: I1124 10:15:04.433275 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29399655-l2ngj" Nov 24 10:15:04 crc kubenswrapper[4719]: I1124 10:15:04.492975 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv"] Nov 24 10:15:04 crc kubenswrapper[4719]: I1124 10:15:04.501973 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29399610-rjhbv"] Nov 24 10:15:04 crc kubenswrapper[4719]: I1124 10:15:04.546403 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c32a80c-2ba9-4afc-9e04-6bec58abaa4e" path="/var/lib/kubelet/pods/0c32a80c-2ba9-4afc-9e04-6bec58abaa4e/volumes" Nov 24 10:15:04 crc kubenswrapper[4719]: I1124 10:15:04.561842 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:15:04 crc kubenswrapper[4719]: I1124 10:15:04.562188 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.588806 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vkqb2/must-gather-g872x"] Nov 24 10:15:09 crc kubenswrapper[4719]: E1124 10:15:09.591682 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1cd13e-8b25-453a-93e1-0bd4ed2098f0" containerName="collect-profiles" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.591780 4719 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5d1cd13e-8b25-453a-93e1-0bd4ed2098f0" containerName="collect-profiles" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.592127 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1cd13e-8b25-453a-93e1-0bd4ed2098f0" containerName="collect-profiles" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.593507 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.611073 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vkqb2"/"openshift-service-ca.crt" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.614267 4719 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vkqb2"/"kube-root-ca.crt" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.614418 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vkqb2/must-gather-g872x"] Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.654171 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvrbk\" (UniqueName: \"kubernetes.io/projected/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-kube-api-access-nvrbk\") pod \"must-gather-g872x\" (UID: \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\") " pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.654261 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-must-gather-output\") pod \"must-gather-g872x\" (UID: \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\") " pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.764015 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvrbk\" (UniqueName: \"kubernetes.io/projected/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-kube-api-access-nvrbk\") pod \"must-gather-g872x\" (UID: \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\") " pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.765208 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-must-gather-output\") pod \"must-gather-g872x\" (UID: \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\") " pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.766238 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-must-gather-output\") pod \"must-gather-g872x\" (UID: \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\") " pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.799915 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvrbk\" (UniqueName: \"kubernetes.io/projected/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-kube-api-access-nvrbk\") pod \"must-gather-g872x\" (UID: \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\") " pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:15:09 crc kubenswrapper[4719]: I1124 10:15:09.922853 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:15:10 crc kubenswrapper[4719]: I1124 10:15:10.466483 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vkqb2/must-gather-g872x"] Nov 24 10:15:10 crc kubenswrapper[4719]: I1124 10:15:10.507178 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/must-gather-g872x" event={"ID":"9572f9fd-5e52-4924-87c5-b85c9c81fc2e","Type":"ContainerStarted","Data":"82c48b8ae2083c1b0240a2271ca55664c4ed393d992635556006dcf6096c1847"} Nov 24 10:15:11 crc kubenswrapper[4719]: I1124 10:15:11.516756 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/must-gather-g872x" event={"ID":"9572f9fd-5e52-4924-87c5-b85c9c81fc2e","Type":"ContainerStarted","Data":"db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7"} Nov 24 10:15:11 crc kubenswrapper[4719]: I1124 10:15:11.517049 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/must-gather-g872x" event={"ID":"9572f9fd-5e52-4924-87c5-b85c9c81fc2e","Type":"ContainerStarted","Data":"a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3"} Nov 24 10:15:11 crc kubenswrapper[4719]: I1124 10:15:11.533374 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vkqb2/must-gather-g872x" podStartSLOduration=2.533341712 podStartE2EDuration="2.533341712s" podCreationTimestamp="2025-11-24 10:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 10:15:11.531412907 +0000 UTC m=+4887.862686159" watchObservedRunningTime="2025-11-24 10:15:11.533341712 +0000 UTC m=+4887.864614964" Nov 24 10:15:14 crc kubenswrapper[4719]: I1124 10:15:14.172172 4719 scope.go:117] "RemoveContainer" containerID="1dda1bd28a2a67b5781a65df160f6af79fe29b71515a8044cd091aa60bc3569a" Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.691005 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-8gzfc"] Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.693028 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.698955 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vkqb2"/"default-dockercfg-wpc8z" Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.734081 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df06efe7-18a5-4fba-8781-361ecc97bd94-host\") pod \"crc-debug-8gzfc\" (UID: \"df06efe7-18a5-4fba-8781-361ecc97bd94\") " pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.734214 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8c9\" (UniqueName: \"kubernetes.io/projected/df06efe7-18a5-4fba-8781-361ecc97bd94-kube-api-access-pc8c9\") pod \"crc-debug-8gzfc\" (UID: \"df06efe7-18a5-4fba-8781-361ecc97bd94\") " pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.836270 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc8c9\" (UniqueName: \"kubernetes.io/projected/df06efe7-18a5-4fba-8781-361ecc97bd94-kube-api-access-pc8c9\") pod \"crc-debug-8gzfc\" (UID: \"df06efe7-18a5-4fba-8781-361ecc97bd94\") " pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.836387 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df06efe7-18a5-4fba-8781-361ecc97bd94-host\") pod \"crc-debug-8gzfc\" (UID: \"df06efe7-18a5-4fba-8781-361ecc97bd94\") " pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.836553 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df06efe7-18a5-4fba-8781-361ecc97bd94-host\") pod \"crc-debug-8gzfc\" (UID: \"df06efe7-18a5-4fba-8781-361ecc97bd94\") " pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:16 crc kubenswrapper[4719]: I1124 10:15:16.863480 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc8c9\" (UniqueName: \"kubernetes.io/projected/df06efe7-18a5-4fba-8781-361ecc97bd94-kube-api-access-pc8c9\") pod \"crc-debug-8gzfc\" (UID: \"df06efe7-18a5-4fba-8781-361ecc97bd94\") " pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:17 crc kubenswrapper[4719]: I1124 10:15:17.017299 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:17 crc kubenswrapper[4719]: W1124 10:15:17.474220 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf06efe7_18a5_4fba_8781_361ecc97bd94.slice/crio-ef8f6dc308d5cde58885b81aa4959a23d90b8f1d4379692af85cd513b4a16fae WatchSource:0}: Error finding container ef8f6dc308d5cde58885b81aa4959a23d90b8f1d4379692af85cd513b4a16fae: Status 404 returned error can't find the container with id ef8f6dc308d5cde58885b81aa4959a23d90b8f1d4379692af85cd513b4a16fae Nov 24 10:15:17 crc kubenswrapper[4719]: I1124 10:15:17.588087 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" event={"ID":"df06efe7-18a5-4fba-8781-361ecc97bd94","Type":"ContainerStarted","Data":"ef8f6dc308d5cde58885b81aa4959a23d90b8f1d4379692af85cd513b4a16fae"} Nov 24 10:15:18 crc kubenswrapper[4719]: I1124 10:15:18.598699 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" event={"ID":"df06efe7-18a5-4fba-8781-361ecc97bd94","Type":"ContainerStarted","Data":"2b5e11db80420cf0bbcba39345d5ededa3749e7046ebe1afc13aef20b6a345c6"} Nov 24 10:15:18 crc kubenswrapper[4719]: I1124 10:15:18.618902 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" podStartSLOduration=2.6188808420000003 podStartE2EDuration="2.618880842s" podCreationTimestamp="2025-11-24 10:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 10:15:18.614686492 +0000 UTC m=+4894.945959764" watchObservedRunningTime="2025-11-24 10:15:18.618880842 +0000 UTC m=+4894.950154094" Nov 24 10:15:34 crc kubenswrapper[4719]: I1124 10:15:34.562538 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:15:34 crc kubenswrapper[4719]: I1124 10:15:34.563003 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:15:54 crc kubenswrapper[4719]: I1124 10:15:54.881241 4719 generic.go:334] "Generic (PLEG): container finished" podID="df06efe7-18a5-4fba-8781-361ecc97bd94" containerID="2b5e11db80420cf0bbcba39345d5ededa3749e7046ebe1afc13aef20b6a345c6" exitCode=0 Nov 24 10:15:54 crc kubenswrapper[4719]: I1124 10:15:54.882399 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" event={"ID":"df06efe7-18a5-4fba-8781-361ecc97bd94","Type":"ContainerDied","Data":"2b5e11db80420cf0bbcba39345d5ededa3749e7046ebe1afc13aef20b6a345c6"} Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.126464 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.179663 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-8gzfc"] Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.198239 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-8gzfc"] Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.276897 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc8c9\" (UniqueName: \"kubernetes.io/projected/df06efe7-18a5-4fba-8781-361ecc97bd94-kube-api-access-pc8c9\") pod \"df06efe7-18a5-4fba-8781-361ecc97bd94\" (UID: \"df06efe7-18a5-4fba-8781-361ecc97bd94\") " Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.277935 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df06efe7-18a5-4fba-8781-361ecc97bd94-host\") pod \"df06efe7-18a5-4fba-8781-361ecc97bd94\" (UID: \"df06efe7-18a5-4fba-8781-361ecc97bd94\") " Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.278586 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df06efe7-18a5-4fba-8781-361ecc97bd94-host" (OuterVolumeSpecName: "host") pod "df06efe7-18a5-4fba-8781-361ecc97bd94" (UID: "df06efe7-18a5-4fba-8781-361ecc97bd94"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.285248 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df06efe7-18a5-4fba-8781-361ecc97bd94-kube-api-access-pc8c9" (OuterVolumeSpecName: "kube-api-access-pc8c9") pod "df06efe7-18a5-4fba-8781-361ecc97bd94" (UID: "df06efe7-18a5-4fba-8781-361ecc97bd94"). InnerVolumeSpecName "kube-api-access-pc8c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.381412 4719 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/df06efe7-18a5-4fba-8781-361ecc97bd94-host\") on node \"crc\" DevicePath \"\"" Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.381723 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc8c9\" (UniqueName: \"kubernetes.io/projected/df06efe7-18a5-4fba-8781-361ecc97bd94-kube-api-access-pc8c9\") on node \"crc\" DevicePath \"\"" Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.532835 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df06efe7-18a5-4fba-8781-361ecc97bd94" path="/var/lib/kubelet/pods/df06efe7-18a5-4fba-8781-361ecc97bd94/volumes" Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.969026 4719 scope.go:117] "RemoveContainer" containerID="2b5e11db80420cf0bbcba39345d5ededa3749e7046ebe1afc13aef20b6a345c6" Nov 24 10:15:56 crc kubenswrapper[4719]: I1124 10:15:56.969254 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-8gzfc" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.531957 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-nghx2"] Nov 24 10:15:57 crc kubenswrapper[4719]: E1124 10:15:57.532636 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df06efe7-18a5-4fba-8781-361ecc97bd94" containerName="container-00" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.532648 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="df06efe7-18a5-4fba-8781-361ecc97bd94" containerName="container-00" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.532822 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="df06efe7-18a5-4fba-8781-361ecc97bd94" containerName="container-00" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.533446 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.536899 4719 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vkqb2"/"default-dockercfg-wpc8z" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.604644 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-host\") pod \"crc-debug-nghx2\" (UID: \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\") " pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.604952 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxz9\" (UniqueName: \"kubernetes.io/projected/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-kube-api-access-wvxz9\") pod \"crc-debug-nghx2\" (UID: \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\") " pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.707471 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-host\") pod \"crc-debug-nghx2\" (UID: \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\") " pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.707568 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxz9\" (UniqueName: \"kubernetes.io/projected/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-kube-api-access-wvxz9\") pod \"crc-debug-nghx2\" (UID: \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\") " pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.707869 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-host\") pod \"crc-debug-nghx2\" (UID: \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\") " pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.726084 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxz9\" (UniqueName: \"kubernetes.io/projected/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-kube-api-access-wvxz9\") pod \"crc-debug-nghx2\" (UID: \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\") " pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 
10:15:57.853070 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:15:57 crc kubenswrapper[4719]: I1124 10:15:57.984423 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/crc-debug-nghx2" event={"ID":"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552","Type":"ContainerStarted","Data":"7ca491b181556867af72ef791604da50b93d6fe18583377776a185c8fe1fde79"} Nov 24 10:15:58 crc kubenswrapper[4719]: I1124 10:15:58.994262 4719 generic.go:334] "Generic (PLEG): container finished" podID="5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552" containerID="53efce59d86dfed41d78ea8e600ff319341b46770c7fa2e962f990b6aaacc54a" exitCode=0 Nov 24 10:15:58 crc kubenswrapper[4719]: I1124 10:15:58.994571 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/crc-debug-nghx2" event={"ID":"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552","Type":"ContainerDied","Data":"53efce59d86dfed41d78ea8e600ff319341b46770c7fa2e962f990b6aaacc54a"} Nov 24 10:15:59 crc kubenswrapper[4719]: I1124 10:15:59.320644 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-nghx2"] Nov 24 10:15:59 crc kubenswrapper[4719]: I1124 10:15:59.335097 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-nghx2"] Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.142696 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.250807 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-host\") pod \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\" (UID: \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\") " Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.250878 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvxz9\" (UniqueName: \"kubernetes.io/projected/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-kube-api-access-wvxz9\") pod \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\" (UID: \"5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552\") " Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.250949 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-host" (OuterVolumeSpecName: "host") pod "5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552" (UID: "5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.251331 4719 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-host\") on node \"crc\" DevicePath \"\"" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.256294 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-kube-api-access-wvxz9" (OuterVolumeSpecName: "kube-api-access-wvxz9") pod "5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552" (UID: "5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552"). InnerVolumeSpecName "kube-api-access-wvxz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.353105 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvxz9\" (UniqueName: \"kubernetes.io/projected/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552-kube-api-access-wvxz9\") on node \"crc\" DevicePath \"\"" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.532467 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552" path="/var/lib/kubelet/pods/5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552/volumes" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.605801 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-jzjdh"] Nov 24 10:16:00 crc kubenswrapper[4719]: E1124 10:16:00.606350 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552" containerName="container-00" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.606368 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552" containerName="container-00" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.606628 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee0c4c1-eb7c-43fd-b3dd-f94e1864f552" containerName="container-00" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.607654 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.660117 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/392b2164-e4b4-4bec-a1ed-4aa99a045d71-host\") pod \"crc-debug-jzjdh\" (UID: \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\") " pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.660449 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpzvb\" (UniqueName: \"kubernetes.io/projected/392b2164-e4b4-4bec-a1ed-4aa99a045d71-kube-api-access-tpzvb\") pod \"crc-debug-jzjdh\" (UID: \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\") " pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.768505 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpzvb\" (UniqueName: \"kubernetes.io/projected/392b2164-e4b4-4bec-a1ed-4aa99a045d71-kube-api-access-tpzvb\") pod \"crc-debug-jzjdh\" (UID: \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\") " pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.768705 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/392b2164-e4b4-4bec-a1ed-4aa99a045d71-host\") pod \"crc-debug-jzjdh\" (UID: \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\") " pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.768787 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/392b2164-e4b4-4bec-a1ed-4aa99a045d71-host\") pod \"crc-debug-jzjdh\" (UID: \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\") " pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.791309 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tpzvb\" (UniqueName: \"kubernetes.io/projected/392b2164-e4b4-4bec-a1ed-4aa99a045d71-kube-api-access-tpzvb\") pod \"crc-debug-jzjdh\" (UID: \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\") " pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:00 crc kubenswrapper[4719]: I1124 10:16:00.928593 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:00 crc kubenswrapper[4719]: W1124 10:16:00.972746 4719 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392b2164_e4b4_4bec_a1ed_4aa99a045d71.slice/crio-a548984e80bd219dcce54aae0426245a9d38ab6493de06ee95d3afcf73c3d6d7 WatchSource:0}: Error finding container a548984e80bd219dcce54aae0426245a9d38ab6493de06ee95d3afcf73c3d6d7: Status 404 returned error can't find the container with id a548984e80bd219dcce54aae0426245a9d38ab6493de06ee95d3afcf73c3d6d7 Nov 24 10:16:01 crc kubenswrapper[4719]: I1124 10:16:01.018475 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" event={"ID":"392b2164-e4b4-4bec-a1ed-4aa99a045d71","Type":"ContainerStarted","Data":"a548984e80bd219dcce54aae0426245a9d38ab6493de06ee95d3afcf73c3d6d7"} Nov 24 10:16:01 crc kubenswrapper[4719]: I1124 10:16:01.022600 4719 scope.go:117] "RemoveContainer" containerID="53efce59d86dfed41d78ea8e600ff319341b46770c7fa2e962f990b6aaacc54a" Nov 24 10:16:01 crc kubenswrapper[4719]: I1124 10:16:01.022783 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-nghx2" Nov 24 10:16:02 crc kubenswrapper[4719]: I1124 10:16:02.034591 4719 generic.go:334] "Generic (PLEG): container finished" podID="392b2164-e4b4-4bec-a1ed-4aa99a045d71" containerID="c4dc107a66c20eef9e22e8483db54a46148196cb93b446a08dfa034fb67b7cdc" exitCode=0 Nov 24 10:16:02 crc kubenswrapper[4719]: I1124 10:16:02.034999 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" event={"ID":"392b2164-e4b4-4bec-a1ed-4aa99a045d71","Type":"ContainerDied","Data":"c4dc107a66c20eef9e22e8483db54a46148196cb93b446a08dfa034fb67b7cdc"} Nov 24 10:16:02 crc kubenswrapper[4719]: I1124 10:16:02.075168 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-jzjdh"] Nov 24 10:16:02 crc kubenswrapper[4719]: I1124 10:16:02.085964 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vkqb2/crc-debug-jzjdh"] Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.127056 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6d4bf84b58-m828t" podUID="714fe5a8-a778-4366-8823-868dd1210515" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.127029 4719 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-b4c496f69-bks8t" podUID="7cfebe98-a194-4c28-861f-a80f9f9f22de" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.408872 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.486524 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpzvb\" (UniqueName: \"kubernetes.io/projected/392b2164-e4b4-4bec-a1ed-4aa99a045d71-kube-api-access-tpzvb\") pod \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\" (UID: \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\") " Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.486619 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/392b2164-e4b4-4bec-a1ed-4aa99a045d71-host\") pod \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\" (UID: \"392b2164-e4b4-4bec-a1ed-4aa99a045d71\") " Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.487203 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/392b2164-e4b4-4bec-a1ed-4aa99a045d71-host" (OuterVolumeSpecName: "host") pod "392b2164-e4b4-4bec-a1ed-4aa99a045d71" (UID: "392b2164-e4b4-4bec-a1ed-4aa99a045d71"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.499814 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392b2164-e4b4-4bec-a1ed-4aa99a045d71-kube-api-access-tpzvb" (OuterVolumeSpecName: "kube-api-access-tpzvb") pod "392b2164-e4b4-4bec-a1ed-4aa99a045d71" (UID: "392b2164-e4b4-4bec-a1ed-4aa99a045d71"). InnerVolumeSpecName "kube-api-access-tpzvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.536388 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392b2164-e4b4-4bec-a1ed-4aa99a045d71" path="/var/lib/kubelet/pods/392b2164-e4b4-4bec-a1ed-4aa99a045d71/volumes" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.562051 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.562236 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.562330 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.563062 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c367fa3d99ee232632dd218f86db975241bce842b32aed9d95c60ebe991c37c"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.563189 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" 
containerName="machine-config-daemon" containerID="cri-o://7c367fa3d99ee232632dd218f86db975241bce842b32aed9d95c60ebe991c37c" gracePeriod=600 Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.589529 4719 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/392b2164-e4b4-4bec-a1ed-4aa99a045d71-host\") on node \"crc\" DevicePath \"\"" Nov 24 10:16:04 crc kubenswrapper[4719]: I1124 10:16:04.589703 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpzvb\" (UniqueName: \"kubernetes.io/projected/392b2164-e4b4-4bec-a1ed-4aa99a045d71-kube-api-access-tpzvb\") on node \"crc\" DevicePath \"\"" Nov 24 10:16:05 crc kubenswrapper[4719]: I1124 10:16:05.220882 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="7c367fa3d99ee232632dd218f86db975241bce842b32aed9d95c60ebe991c37c" exitCode=0 Nov 24 10:16:05 crc kubenswrapper[4719]: I1124 10:16:05.221153 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"7c367fa3d99ee232632dd218f86db975241bce842b32aed9d95c60ebe991c37c"} Nov 24 10:16:05 crc kubenswrapper[4719]: I1124 10:16:05.221240 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e"} Nov 24 10:16:05 crc kubenswrapper[4719]: I1124 10:16:05.221274 4719 scope.go:117] "RemoveContainer" containerID="dfb1becfe408c22a573a6cadefde09fe0257fde132727e7f012993cba27d72de" Nov 24 10:16:05 crc kubenswrapper[4719]: I1124 10:16:05.222947 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/crc-debug-jzjdh" Nov 24 10:16:05 crc kubenswrapper[4719]: I1124 10:16:05.258136 4719 scope.go:117] "RemoveContainer" containerID="c4dc107a66c20eef9e22e8483db54a46148196cb93b446a08dfa034fb67b7cdc" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.184367 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wsf4d"] Nov 24 10:16:13 crc kubenswrapper[4719]: E1124 10:16:13.185499 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392b2164-e4b4-4bec-a1ed-4aa99a045d71" containerName="container-00" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.185515 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b2164-e4b4-4bec-a1ed-4aa99a045d71" containerName="container-00" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.185712 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="392b2164-e4b4-4bec-a1ed-4aa99a045d71" containerName="container-00" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.187382 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.224927 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsf4d"] Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.364812 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-catalog-content\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.365759 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4whn\" (UniqueName: \"kubernetes.io/projected/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-kube-api-access-v4whn\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.365999 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-utilities\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.467680 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4whn\" (UniqueName: \"kubernetes.io/projected/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-kube-api-access-v4whn\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.468181 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-utilities\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.468348 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-catalog-content\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.468937 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-catalog-content\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.469288 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-utilities\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.488011 4719 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v4whn\" (UniqueName: \"kubernetes.io/projected/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-kube-api-access-v4whn\") pod \"redhat-marketplace-wsf4d\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:13 crc kubenswrapper[4719]: I1124 10:16:13.505890 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:14 crc kubenswrapper[4719]: I1124 10:16:14.073821 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsf4d"] Nov 24 10:16:14 crc kubenswrapper[4719]: I1124 10:16:14.316300 4719 generic.go:334] "Generic (PLEG): container finished" podID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerID="ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144" exitCode=0 Nov 24 10:16:14 crc kubenswrapper[4719]: I1124 10:16:14.316402 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsf4d" event={"ID":"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8","Type":"ContainerDied","Data":"ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144"} Nov 24 10:16:14 crc kubenswrapper[4719]: I1124 10:16:14.316624 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsf4d" event={"ID":"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8","Type":"ContainerStarted","Data":"0eb7e6e2cbd81bc0e4466ee444f46c07ef5813cbf49651dc1ed5b2d616eeb9df"} Nov 24 10:16:15 crc kubenswrapper[4719]: I1124 10:16:15.325480 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsf4d" event={"ID":"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8","Type":"ContainerStarted","Data":"0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253"} Nov 24 10:16:18 crc kubenswrapper[4719]: I1124 10:16:18.352741 4719 generic.go:334] "Generic (PLEG): container finished" podID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerID="0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253" exitCode=0 Nov 24 10:16:18 crc kubenswrapper[4719]: I1124 10:16:18.352806 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsf4d" event={"ID":"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8","Type":"ContainerDied","Data":"0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253"} Nov 24 10:16:20 crc kubenswrapper[4719]: I1124 10:16:20.371023 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsf4d" event={"ID":"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8","Type":"ContainerStarted","Data":"518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d"} Nov 24 10:16:20 crc kubenswrapper[4719]: I1124 10:16:20.394160 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wsf4d" podStartSLOduration=2.8843831079999998 podStartE2EDuration="7.39414534s" podCreationTimestamp="2025-11-24 10:16:13 +0000 UTC" firstStartedPulling="2025-11-24 10:16:14.317896737 +0000 UTC m=+4950.649169989" lastFinishedPulling="2025-11-24 10:16:18.827658969 +0000 UTC m=+4955.158932221" observedRunningTime="2025-11-24 10:16:20.388874569 +0000 UTC m=+4956.720147831" watchObservedRunningTime="2025-11-24 10:16:20.39414534 +0000 UTC m=+4956.725418592" Nov 24 10:16:23 crc kubenswrapper[4719]: I1124 10:16:23.506805 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:23 crc kubenswrapper[4719]: I1124 10:16:23.507262 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:23 crc kubenswrapper[4719]: I1124 10:16:23.565493 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:24 crc kubenswrapper[4719]: I1124 10:16:24.453177 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:24 crc kubenswrapper[4719]: I1124 10:16:24.514803 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsf4d"] Nov 24 10:16:26 crc kubenswrapper[4719]: I1124 10:16:26.415391 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wsf4d" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerName="registry-server" containerID="cri-o://518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d" gracePeriod=2 Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.002233 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.106077 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-utilities\") pod \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.106137 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4whn\" (UniqueName: \"kubernetes.io/projected/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-kube-api-access-v4whn\") pod \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.106486 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-catalog-content\") pod \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\" (UID: \"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8\") " Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.106957 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-utilities" (OuterVolumeSpecName: "utilities") pod "b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" (UID: "b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.107654 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.116102 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-kube-api-access-v4whn" (OuterVolumeSpecName: "kube-api-access-v4whn") pod "b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" (UID: "b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8"). InnerVolumeSpecName "kube-api-access-v4whn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.135271 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" (UID: "b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.209423 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4whn\" (UniqueName: \"kubernetes.io/projected/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-kube-api-access-v4whn\") on node \"crc\" DevicePath \"\"" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.209470 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.427346 4719 generic.go:334] "Generic (PLEG): container finished" podID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerID="518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d" exitCode=0 Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.427385 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsf4d" event={"ID":"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8","Type":"ContainerDied","Data":"518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d"} Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.427412 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsf4d" event={"ID":"b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8","Type":"ContainerDied","Data":"0eb7e6e2cbd81bc0e4466ee444f46c07ef5813cbf49651dc1ed5b2d616eeb9df"} Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.427428 4719 scope.go:117] "RemoveContainer" containerID="518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.427566 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsf4d" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.471887 4719 scope.go:117] "RemoveContainer" containerID="0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.472627 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsf4d"] Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.484281 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsf4d"] Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.926713 4719 scope.go:117] "RemoveContainer" containerID="ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144" Nov 24 10:16:27 crc kubenswrapper[4719]: I1124 10:16:27.995606 4719 scope.go:117] "RemoveContainer" containerID="518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d" Nov 24 10:16:28 crc kubenswrapper[4719]: E1124 10:16:28.000420 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d\": container with ID starting with 518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d not found: ID does not exist" containerID="518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d" Nov 24 10:16:28 crc kubenswrapper[4719]: I1124 10:16:28.000469 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d"} err="failed to get container status \"518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d\": rpc error: code = NotFound desc = could not find container \"518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d\": container with ID starting with 518af0ec0bd4d4c7ed746b2b61ebdde493bc65e659abefa382fa0ae6a312797d not found: ID does not exist" Nov 24 10:16:28 crc kubenswrapper[4719]: I1124 10:16:28.000499 4719 scope.go:117] "RemoveContainer" containerID="0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253" Nov 24 10:16:28 crc kubenswrapper[4719]: E1124 10:16:28.001808 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253\": container with ID starting with 0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253 not found: ID does not exist" containerID="0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253" Nov 24 10:16:28 crc kubenswrapper[4719]: I1124 10:16:28.001837 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253"} err="failed to get container status \"0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253\": rpc error: code = NotFound desc = could not find container \"0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253\": container with ID starting with 0ef4822c7c58915070852bfcf138f0ba4427964ab24dbc18dc16de787ccc0253 not found: ID does not exist" Nov 24 10:16:28 crc kubenswrapper[4719]: I1124 10:16:28.001857 4719 scope.go:117] "RemoveContainer" containerID="ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144" Nov 24 10:16:28 crc kubenswrapper[4719]: E1124 10:16:28.002154 4719 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144\": container with ID starting with ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144 not found: ID does not exist" containerID="ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144" Nov 24 10:16:28 crc kubenswrapper[4719]: I1124 10:16:28.002180 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144"} err="failed to get container status \"ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144\": rpc error: code = NotFound desc = could not find container \"ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144\": container with ID starting with ad15694a685b24d98464b5358856b7d31497a90742d7d7405917faf1411a0144 not found: ID does not exist" Nov 24 10:16:28 crc kubenswrapper[4719]: I1124 10:16:28.532743 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" path="/var/lib/kubelet/pods/b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8/volumes" Nov 24 10:17:19 crc kubenswrapper[4719]: I1124 10:17:19.819340 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c84b4b586-mwtc8_390c94ff-225b-448b-963d-9b8cb729963a/barbican-api/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.036644 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c84b4b586-mwtc8_390c94ff-225b-448b-963d-9b8cb729963a/barbican-api-log/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.161912 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-68fd59f556-bvd2x_6feeb8da-45f5-4eb9-bae3-5101afc7e021/barbican-keystone-listener/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.200537 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-68fd59f556-bvd2x_6feeb8da-45f5-4eb9-bae3-5101afc7e021/barbican-keystone-listener-log/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.394264 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55fc6d8c7-9576d_9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb/barbican-worker-log/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.420079 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55fc6d8c7-9576d_9c9458ba-d5e7-4232-bb1b-e63ddc8aaecb/barbican-worker/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.622477 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-tmdpx_2825c32a-3ceb-4ba8-a522-554244ca93dd/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.718889 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd478071-4e9d-402f-afa7-fbd28f489095/ceilometer-central-agent/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.794374 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd478071-4e9d-402f-afa7-fbd28f489095/ceilometer-notification-agent/0.log" Nov 24 10:17:20 crc kubenswrapper[4719]: I1124 10:17:20.892306 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd478071-4e9d-402f-afa7-fbd28f489095/proxy-httpd/0.log" Nov 24 10:17:20 
crc kubenswrapper[4719]: I1124 10:17:20.963108 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dd478071-4e9d-402f-afa7-fbd28f489095/sg-core/0.log" Nov 24 10:17:21 crc kubenswrapper[4719]: I1124 10:17:21.126643 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-xnzpr_1dad4f07-729f-4a99-bc32-62f666007c12/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:21 crc kubenswrapper[4719]: I1124 10:17:21.209118 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qvlzq_6d07d001-6f91-4b09-9897-01f55286e015/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:21 crc kubenswrapper[4719]: I1124 10:17:21.379935 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ee147176-e4d4-4f7c-a73b-aa861bc83f31/cinder-api/0.log" Nov 24 10:17:21 crc kubenswrapper[4719]: I1124 10:17:21.426196 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ee147176-e4d4-4f7c-a73b-aa861bc83f31/cinder-api-log/0.log" Nov 24 10:17:21 crc kubenswrapper[4719]: I1124 10:17:21.602799 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9d9e3bfc-9c58-4534-89f9-72f35c264a80/probe/0.log" Nov 24 10:17:21 crc kubenswrapper[4719]: I1124 10:17:21.739403 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9d9e3bfc-9c58-4534-89f9-72f35c264a80/cinder-backup/0.log" Nov 24 10:17:21 crc kubenswrapper[4719]: I1124 10:17:21.824176 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_44ceda2d-a4e3-4606-be8b-fa3806e4be38/cinder-scheduler/0.log" Nov 24 10:17:21 crc kubenswrapper[4719]: I1124 10:17:21.901750 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_44ceda2d-a4e3-4606-be8b-fa3806e4be38/probe/0.log" Nov 24 10:17:22 crc kubenswrapper[4719]: I1124 10:17:22.547694 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_82bfb246-8a64-46b7-9223-f2158b114186/probe/0.log" Nov 24 10:17:22 crc kubenswrapper[4719]: I1124 10:17:22.574732 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_82bfb246-8a64-46b7-9223-f2158b114186/cinder-volume/0.log" Nov 24 10:17:22 crc kubenswrapper[4719]: I1124 10:17:22.635187 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fbw9l_70b5dfb2-d163-4188-989e-e1f2a9d84026/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:22 crc kubenswrapper[4719]: I1124 10:17:22.839290 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-82qn6_9ebf3aed-eec5-4676-9f83-23ea070aa92e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:22 crc kubenswrapper[4719]: I1124 10:17:22.927355 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-x2fxq_643db723-7fbb-4c9e-a815-fcfbc4eab02c/init/0.log" Nov 24 10:17:23 crc kubenswrapper[4719]: I1124 10:17:23.115724 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-x2fxq_643db723-7fbb-4c9e-a815-fcfbc4eab02c/init/0.log" Nov 24 10:17:23 crc kubenswrapper[4719]: I1124 10:17:23.190058 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_0b2a5521-1fe8-40c7-af69-18332a312c14/glance-httpd/0.log" Nov 24 10:17:23 crc kubenswrapper[4719]: I1124 10:17:23.358460 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c846ff5b9-x2fxq_643db723-7fbb-4c9e-a815-fcfbc4eab02c/dnsmasq-dns/0.log" Nov 24 10:17:23 crc kubenswrapper[4719]: I1124 10:17:23.441683 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0b2a5521-1fe8-40c7-af69-18332a312c14/glance-log/0.log" Nov 24 10:17:23 crc kubenswrapper[4719]: I1124 10:17:23.516583 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e745f799-46a2-4fd7-b32d-09a11558070b/glance-httpd/0.log" Nov 24 10:17:23 crc kubenswrapper[4719]: I1124 10:17:23.631650 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e745f799-46a2-4fd7-b32d-09a11558070b/glance-log/0.log" Nov 24 10:17:24 crc kubenswrapper[4719]: I1124 10:17:24.246466 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f6b7744d-ql24k_494049ce-0355-420c-9d3b-774f7befb12a/horizon/0.log" Nov 24 10:17:24 crc kubenswrapper[4719]: I1124 10:17:24.285749 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2rf86_b1eec709-2c88-4a47-bc8b-51f49cc99053/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:24 crc kubenswrapper[4719]: I1124 10:17:24.370829 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f6b7744d-ql24k_494049ce-0355-420c-9d3b-774f7befb12a/horizon-log/0.log" Nov 24 10:17:24 crc kubenswrapper[4719]: I1124 10:17:24.585185 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dvw2g_b7e3784d-ae59-4dce-9c51-429e2361ee3b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:24 crc kubenswrapper[4719]: I1124 10:17:24.761075 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c66bd98b8-qwf7d_4bfe0fc6-5440-468a-9ad6-6f9f6171e639/keystone-api/0.log" Nov 24 10:17:24 crc kubenswrapper[4719]: I1124 10:17:24.810183 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29399641-rwbnf_6ff61f4c-fc69-4299-987e-1c9ca3e1c633/keystone-cron/0.log" Nov 24 10:17:24 crc kubenswrapper[4719]: I1124 10:17:24.987290 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cc7de5f2-3f27-47e7-a08e-f3b13211531a/kube-state-metrics/0.log" Nov 24 10:17:25 crc kubenswrapper[4719]: I1124 10:17:25.012225 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6bnmf_e45a8b91-3c8a-4471-852f-d648ddadcf6f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:25 crc kubenswrapper[4719]: I1124 10:17:25.273429 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7db9d547-856d-42d1-a2b5-bdc02f69d938/manila-api/0.log" Nov 24 10:17:25 crc kubenswrapper[4719]: I1124 10:17:25.303949 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7db9d547-856d-42d1-a2b5-bdc02f69d938/manila-api-log/0.log" Nov 24 10:17:25 crc kubenswrapper[4719]: I1124 10:17:25.347822 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_e101dc58-4d71-4456-aa34-e215690b34bf/probe/0.log" Nov 24 10:17:25 crc kubenswrapper[4719]: I1124 10:17:25.459695 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_e101dc58-4d71-4456-aa34-e215690b34bf/manila-scheduler/0.log" Nov 24 10:17:25 crc kubenswrapper[4719]: I1124 10:17:25.609268 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_880fcfd8-382a-4865-997b-203e11aad18d/manila-share/0.log" Nov 24 10:17:25 crc kubenswrapper[4719]: I1124 10:17:25.618964 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_880fcfd8-382a-4865-997b-203e11aad18d/probe/0.log" Nov 24 10:17:25 crc kubenswrapper[4719]: I1124 10:17:25.890433 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86d4855669-sjtqj_735cee72-40a1-4828-936f-9459f731b3da/neutron-httpd/0.log" Nov 24 10:17:26 crc kubenswrapper[4719]: I1124 10:17:26.129914 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86d4855669-sjtqj_735cee72-40a1-4828-936f-9459f731b3da/neutron-api/0.log" Nov 24 10:17:26 crc kubenswrapper[4719]: I1124 10:17:26.167378 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-hp9pm_4e1b3223-80c0-40c5-9f45-833af2ab03be/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:27 crc kubenswrapper[4719]: I1124 10:17:27.390198 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_7bb7c808-2485-4aba-acd2-2b509f4ed607/nova-cell0-conductor-conductor/0.log" Nov 24 10:17:27 crc kubenswrapper[4719]: I1124 10:17:27.390327 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_007b5bfc-1e0a-4468-87ae-5fae8c196871/nova-api-log/0.log" Nov 24 10:17:27 crc kubenswrapper[4719]: I1124 10:17:27.451167 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3c77db21-d39a-4c8d-bd9d-a4e4c3d37a3f/nova-cell1-conductor-conductor/0.log" Nov 24 10:17:27 crc kubenswrapper[4719]: I1124 10:17:27.460979 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_007b5bfc-1e0a-4468-87ae-5fae8c196871/nova-api-api/0.log" Nov 24 10:17:27 crc kubenswrapper[4719]: I1124 10:17:27.812667 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6229cd6f-c2de-47c4-9edf-99ebeddaf05b/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 10:17:27 crc kubenswrapper[4719]: I1124 10:17:27.827976 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-lxh45_c36f9bbf-22ba-458e-a531-081db1b99878/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:28 crc kubenswrapper[4719]: I1124 10:17:28.179045 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3facc49a-dd07-4db6-b353-a06ff01dc19c/nova-metadata-log/0.log" Nov 24 10:17:28 crc kubenswrapper[4719]: I1124 10:17:28.422559 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_98cf534d-3e13-4443-901c-0755d91b2f09/mysql-bootstrap/0.log" Nov 24 10:17:28 crc kubenswrapper[4719]: I1124 10:17:28.494300 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e543db5c-487f-4724-91aa-c3ea4cb33149/nova-scheduler-scheduler/0.log" Nov 24 
10:17:28 crc kubenswrapper[4719]: I1124 10:17:28.653129 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_98cf534d-3e13-4443-901c-0755d91b2f09/mysql-bootstrap/0.log" Nov 24 10:17:28 crc kubenswrapper[4719]: I1124 10:17:28.700917 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_98cf534d-3e13-4443-901c-0755d91b2f09/galera/0.log" Nov 24 10:17:28 crc kubenswrapper[4719]: I1124 10:17:28.956709 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38/mysql-bootstrap/0.log" Nov 24 10:17:29 crc kubenswrapper[4719]: I1124 10:17:29.168516 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38/galera/0.log" Nov 24 10:17:29 crc kubenswrapper[4719]: I1124 10:17:29.268295 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0c27eb57-dd8a-4a0b-b1a4-f51d183e2c38/mysql-bootstrap/0.log" Nov 24 10:17:29 crc kubenswrapper[4719]: I1124 10:17:29.393271 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_38d62700-956d-4aa3-a239-ff6fb8068ded/openstackclient/0.log" Nov 24 10:17:29 crc kubenswrapper[4719]: I1124 10:17:29.634667 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ccf6d_225b57e5-7f49-4b51-87db-6c790f23bf6e/ovn-controller/0.log" Nov 24 10:17:29 crc kubenswrapper[4719]: I1124 10:17:29.945293 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xdb6r_7bc3fe26-9fdd-4077-b4e1-6f9a35219a21/openstack-network-exporter/0.log" Nov 24 10:17:29 crc kubenswrapper[4719]: I1124 10:17:29.972579 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bk9qz_d36ea9cd-a7ed-463f-9ef5-58066e1446ed/ovsdb-server-init/0.log" Nov 24 10:17:30 crc kubenswrapper[4719]: I1124 10:17:30.063674 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3facc49a-dd07-4db6-b353-a06ff01dc19c/nova-metadata-metadata/0.log" Nov 24 10:17:30 crc kubenswrapper[4719]: I1124 10:17:30.804132 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bk9qz_d36ea9cd-a7ed-463f-9ef5-58066e1446ed/ovsdb-server-init/0.log" Nov 24 10:17:30 crc kubenswrapper[4719]: I1124 10:17:30.833702 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bk9qz_d36ea9cd-a7ed-463f-9ef5-58066e1446ed/ovs-vswitchd/0.log" Nov 24 10:17:30 crc kubenswrapper[4719]: I1124 10:17:30.877104 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-bk9qz_d36ea9cd-a7ed-463f-9ef5-58066e1446ed/ovsdb-server/0.log" Nov 24 10:17:31 crc kubenswrapper[4719]: I1124 10:17:31.134487 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6kl84_76df25ad-66c3-42d0-8539-b083731a87be/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:31 crc kubenswrapper[4719]: I1124 10:17:31.151374 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_73dcc2c6-9ccf-4682-bd39-3c439d4691a2/openstack-network-exporter/0.log" Nov 24 10:17:31 crc kubenswrapper[4719]: I1124 10:17:31.265377 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_73dcc2c6-9ccf-4682-bd39-3c439d4691a2/ovn-northd/0.log" Nov 24 10:17:31 crc 
kubenswrapper[4719]: I1124 10:17:31.387350 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30c29a06-49fe-444c-befa-e10d67ac0e5e/openstack-network-exporter/0.log" Nov 24 10:17:31 crc kubenswrapper[4719]: I1124 10:17:31.529397 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30c29a06-49fe-444c-befa-e10d67ac0e5e/ovsdbserver-nb/0.log" Nov 24 10:17:31 crc kubenswrapper[4719]: I1124 10:17:31.609471 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0be9bc93-deb3-4864-a259-dc32d2d64870/openstack-network-exporter/0.log" Nov 24 10:17:31 crc kubenswrapper[4719]: I1124 10:17:31.787311 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0be9bc93-deb3-4864-a259-dc32d2d64870/ovsdbserver-sb/0.log" Nov 24 10:17:31 crc kubenswrapper[4719]: I1124 10:17:31.912396 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5478d99856-2md7b_d70d9227-aa5e-4855-b4de-8bb688c24f34/placement-api/0.log" Nov 24 10:17:32 crc kubenswrapper[4719]: I1124 10:17:32.422562 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cdc73497-dc8e-44ef-b146-be6598f87eec/setup-container/0.log" Nov 24 10:17:32 crc kubenswrapper[4719]: I1124 10:17:32.458965 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5478d99856-2md7b_d70d9227-aa5e-4855-b4de-8bb688c24f34/placement-log/0.log" Nov 24 10:17:32 crc kubenswrapper[4719]: I1124 10:17:32.603490 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cdc73497-dc8e-44ef-b146-be6598f87eec/setup-container/0.log" Nov 24 10:17:32 crc kubenswrapper[4719]: I1124 10:17:32.666268 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cdc73497-dc8e-44ef-b146-be6598f87eec/rabbitmq/0.log" Nov 24 10:17:32 crc kubenswrapper[4719]: I1124 10:17:32.809778 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_576b0826-aefe-4ef2-b0f8-77e8d7811a29/setup-container/0.log" Nov 24 10:17:33 crc kubenswrapper[4719]: I1124 10:17:33.056112 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_576b0826-aefe-4ef2-b0f8-77e8d7811a29/setup-container/0.log" Nov 24 10:17:33 crc kubenswrapper[4719]: I1124 10:17:33.111015 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vb784_6aca06db-5628-433e-a1f4-f603fa8ece51/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:33 crc kubenswrapper[4719]: I1124 10:17:33.140961 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_576b0826-aefe-4ef2-b0f8-77e8d7811a29/rabbitmq/0.log" Nov 24 10:17:33 crc kubenswrapper[4719]: I1124 10:17:33.437488 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5gn82_63fc12c2-52bf-43d5-8abb-5ddf94dfdb4f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:33 crc kubenswrapper[4719]: I1124 10:17:33.503833 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gj9mj_f686dd59-557a-4156-bf11-a0face9d15ea/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:33 crc kubenswrapper[4719]: I1124 10:17:33.838565 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_9c489706-83cc-4c99-9146-178f1efd5551/tempest-tests-tempest-tests-runner/0.log" Nov 24 10:17:33 crc kubenswrapper[4719]: I1124 10:17:33.893361 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5pzbb_d2a2f001-9ea9-45a6-a2c6-6beb9de6b372/ssh-known-hosts-edpm-deployment/0.log" Nov 24 10:17:34 crc kubenswrapper[4719]: I1124 10:17:34.053976 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_373e0d8e-a16a-4daa-8b4c-895994f91783/test-operator-logs-container/0.log" Nov 24 10:17:34 crc kubenswrapper[4719]: I1124 10:17:34.271423 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-zxmjw_6d644fcc-6653-41e6-835d-430f31694bd1/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 10:17:47 crc kubenswrapper[4719]: I1124 10:17:47.869091 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_769e49a4-92ab-4c92-aebd-3c79f66a6227/memcached/0.log" Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.240942 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-6hhz5_a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1/kube-rbac-proxy/0.log" Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.296512 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-6hhz5_a2d8f034-9b1e-4b62-9c3a-ffc0c0379ad1/manager/0.log" Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.423285 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-sf5qt_064a4ed4-46e3-4daf-8a9d-21c8475ba687/kube-rbac-proxy/0.log" Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.517364 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-sf5qt_064a4ed4-46e3-4daf-8a9d-21c8475ba687/manager/0.log" Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.562248 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.562302 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.660991 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-tjjkt_9d35d376-e7fb-41da-bf47-efd2e5f3ea57/kube-rbac-proxy/0.log" Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.667117 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-tjjkt_9d35d376-e7fb-41da-bf47-efd2e5f3ea57/manager/0.log" Nov 24 10:18:04 crc kubenswrapper[4719]: I1124 10:18:04.853000 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/util/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.058123 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/pull/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.100102 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/util/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.129256 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/pull/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.370661 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/util/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.370935 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/pull/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.379480 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_eb43bae5e84b248e1aa63efc2800ff09efd3ed0938ade6192596eaf85cwprvp_f43e7773-89ab-406b-a3dc-5e20a490eafc/extract/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.631188 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-c9h59_5dce0610-7470-47d2-ae74-ca7fccb82b1f/manager/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.639815 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-c9h59_5dce0610-7470-47d2-ae74-ca7fccb82b1f/kube-rbac-proxy/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.681757 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-xkfjt_5a2058d2-1589-484e-a5a1-de7e31af1a63/kube-rbac-proxy/0.log" Nov 24 10:18:05 crc kubenswrapper[4719]: I1124 10:18:05.851623 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-xkfjt_5a2058d2-1589-484e-a5a1-de7e31af1a63/manager/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.012636 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-j22wh_9d835ba0-d338-45db-b417-7087d4cced01/kube-rbac-proxy/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.091701 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-j22wh_9d835ba0-d338-45db-b417-7087d4cced01/manager/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.220490 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-fhb77_08979ac6-d1d0-4ef7-8996-5b02e8e8dae6/kube-rbac-proxy/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 
10:18:06.417538 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-fhb77_08979ac6-d1d0-4ef7-8996-5b02e8e8dae6/manager/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.460251 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-4sxvh_231d0c7b-d43e-4169-8b4e-940289894809/kube-rbac-proxy/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.519558 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-4sxvh_231d0c7b-d43e-4169-8b4e-940289894809/manager/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.680121 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-lsd4k_17ddd27a-66d1-4d80-abc7-80fde501fa8d/kube-rbac-proxy/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.782733 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-lsd4k_17ddd27a-66d1-4d80-abc7-80fde501fa8d/manager/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.861952 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-lz2r8_23502fbc-6d87-4ca2-80b3-d5af1e94205e/kube-rbac-proxy/0.log" Nov 24 10:18:06 crc kubenswrapper[4719]: I1124 10:18:06.935317 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-lz2r8_23502fbc-6d87-4ca2-80b3-d5af1e94205e/manager/0.log" Nov 24 10:18:07 crc kubenswrapper[4719]: I1124 10:18:07.080378 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-r2r85_a0a59a11-1bf3-4ff8-8496-9414bc0ae549/kube-rbac-proxy/0.log" Nov 24 10:18:07 crc kubenswrapper[4719]: I1124 10:18:07.127303 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-r2r85_a0a59a11-1bf3-4ff8-8496-9414bc0ae549/manager/0.log" Nov 24 10:18:07 crc kubenswrapper[4719]: I1124 10:18:07.622557 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-lthw6_30241c11-005e-4410-ad1a-71d6c5c0910f/kube-rbac-proxy/0.log" Nov 24 10:18:07 crc kubenswrapper[4719]: I1124 10:18:07.743669 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-lthw6_30241c11-005e-4410-ad1a-71d6c5c0910f/manager/0.log" Nov 24 10:18:07 crc kubenswrapper[4719]: I1124 10:18:07.872626 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-plrvj_070e32a3-4fa9-4ab4-9e55-d76c0c87db3c/kube-rbac-proxy/0.log" Nov 24 10:18:07 crc kubenswrapper[4719]: I1124 10:18:07.970091 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-plrvj_070e32a3-4fa9-4ab4-9e55-d76c0c87db3c/manager/0.log" Nov 24 10:18:08 crc kubenswrapper[4719]: I1124 10:18:08.003901 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-rnvl8_1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce/kube-rbac-proxy/0.log" Nov 24 10:18:08 crc 
kubenswrapper[4719]: I1124 10:18:08.129361 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-rnvl8_1d3d3b38-b3f5-49cc-aa73-40fb03afd3ce/manager/0.log" Nov 24 10:18:08 crc kubenswrapper[4719]: I1124 10:18:08.237813 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-lf45p_643149e5-3960-4912-a497-c0cb9c0e722f/kube-rbac-proxy/0.log" Nov 24 10:18:08 crc kubenswrapper[4719]: I1124 10:18:08.249627 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-lf45p_643149e5-3960-4912-a497-c0cb9c0e722f/manager/0.log" Nov 24 10:18:08 crc kubenswrapper[4719]: I1124 10:18:08.418976 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5f88c7d9f9-n97nx_37253c68-54fd-490c-9486-f2a4f2ffe834/kube-rbac-proxy/0.log" Nov 24 10:18:08 crc kubenswrapper[4719]: I1124 10:18:08.652478 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-56cb4fc9f6-bx26b_2065277b-46c2-4b27-9458-f671c1319c76/kube-rbac-proxy/0.log" Nov 24 10:18:08 crc kubenswrapper[4719]: I1124 10:18:08.870495 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-56cb4fc9f6-bx26b_2065277b-46c2-4b27-9458-f671c1319c76/operator/0.log" Nov 24 10:18:09 crc kubenswrapper[4719]: I1124 10:18:09.372997 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-gqnbl_c4688244-99a9-4a75-8501-b1062f24b517/kube-rbac-proxy/0.log" Nov 24 10:18:09 crc kubenswrapper[4719]: I1124 10:18:09.421352 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-czgfr_96d6d0aa-864c-432b-a1c1-5eef084a21b1/registry-server/0.log" Nov 24 10:18:09 crc kubenswrapper[4719]: I1124 10:18:09.667505 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-gqnbl_c4688244-99a9-4a75-8501-b1062f24b517/manager/0.log" Nov 24 10:18:09 crc kubenswrapper[4719]: I1124 10:18:09.668939 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-d4vvj_a951b65e-e9bd-43bc-9fa0-673642653e4c/kube-rbac-proxy/0.log" Nov 24 10:18:09 crc kubenswrapper[4719]: I1124 10:18:09.731784 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-d4vvj_a951b65e-e9bd-43bc-9fa0-673642653e4c/manager/0.log" Nov 24 10:18:09 crc kubenswrapper[4719]: I1124 10:18:09.925186 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5f88c7d9f9-n97nx_37253c68-54fd-490c-9486-f2a4f2ffe834/manager/0.log" Nov 24 10:18:09 crc kubenswrapper[4719]: I1124 10:18:09.928459 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-wsjxj_33185bd6-40f2-4fb4-83b0-dd469f48598f/operator/0.log" Nov 24 10:18:10 crc kubenswrapper[4719]: I1124 10:18:10.012664 4719 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-tlsj6_3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b/kube-rbac-proxy/0.log" Nov 24 10:18:10 crc kubenswrapper[4719]: I1124 10:18:10.153138 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-m828t_714fe5a8-a778-4366-8823-868dd1210515/kube-rbac-proxy/0.log" Nov 24 10:18:10 crc kubenswrapper[4719]: I1124 10:18:10.158612 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-tlsj6_3d89aa5f-f2b5-4752-b2ed-05a38ceb6f4b/manager/0.log" Nov 24 10:18:10 crc kubenswrapper[4719]: I1124 10:18:10.243483 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6d4bf84b58-m828t_714fe5a8-a778-4366-8823-868dd1210515/manager/0.log" Nov 24 10:18:10 crc kubenswrapper[4719]: I1124 10:18:10.386658 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-bks8t_7cfebe98-a194-4c28-861f-a80f9f9f22de/kube-rbac-proxy/0.log" Nov 24 10:18:10 crc kubenswrapper[4719]: I1124 10:18:10.404878 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-bks8t_7cfebe98-a194-4c28-861f-a80f9f9f22de/manager/0.log" Nov 24 10:18:10 crc kubenswrapper[4719]: I1124 10:18:10.460296 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-br6f4_d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc/kube-rbac-proxy/0.log" Nov 24 10:18:10 crc kubenswrapper[4719]: I1124 10:18:10.507129 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-br6f4_d43e9aa8-c51e-4f12-8b7c-992c1d3fabcc/manager/0.log" Nov 24 10:18:29 crc kubenswrapper[4719]: I1124 10:18:29.353409 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jpl9f_f42a4caa-e790-4ec2-a6fd-28d97cafcf32/control-plane-machine-set-operator/0.log" Nov 24 10:18:29 crc kubenswrapper[4719]: I1124 10:18:29.566331 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jkf8p_613468f4-6a02-4828-8873-01bccb4b2c43/kube-rbac-proxy/0.log" Nov 24 10:18:29 crc kubenswrapper[4719]: I1124 10:18:29.606912 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jkf8p_613468f4-6a02-4828-8873-01bccb4b2c43/machine-api-operator/0.log" Nov 24 10:18:34 crc kubenswrapper[4719]: I1124 10:18:34.562274 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:18:34 crc kubenswrapper[4719]: I1124 10:18:34.563713 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:18:41 crc kubenswrapper[4719]: I1124 10:18:41.231825 4719 log.go:25] "Finished parsing 
log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-rwrqz_6810bbaf-a058-4255-a776-13435cfd7f16/cert-manager-controller/0.log" Nov 24 10:18:41 crc kubenswrapper[4719]: I1124 10:18:41.446316 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-qg4fz_2e8b2163-ffd6-4935-a172-bdae97882475/cert-manager-cainjector/0.log" Nov 24 10:18:41 crc kubenswrapper[4719]: I1124 10:18:41.467423 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-w9hp2_55b792be-fd7f-49c7-b9c9-e90acd66701a/cert-manager-webhook/0.log" Nov 24 10:18:56 crc kubenswrapper[4719]: I1124 10:18:56.641780 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-4ssqk_789cda50-c0b4-40be-88a7-9af3409bc49c/nmstate-console-plugin/0.log" Nov 24 10:18:56 crc kubenswrapper[4719]: I1124 10:18:56.776459 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dd5zz_6b698c0f-63ea-4883-8771-f8b53718d191/nmstate-handler/0.log" Nov 24 10:18:56 crc kubenswrapper[4719]: I1124 10:18:56.942490 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-r5mnn_e0130b51-d625-42b0-9f57-018da660dddd/kube-rbac-proxy/0.log" Nov 24 10:18:57 crc kubenswrapper[4719]: I1124 10:18:57.069452 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-r5mnn_e0130b51-d625-42b0-9f57-018da660dddd/nmstate-metrics/0.log" Nov 24 10:18:57 crc kubenswrapper[4719]: I1124 10:18:57.122360 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-2w459_875211b7-4698-4cb8-b214-1665dd3a1a77/nmstate-operator/0.log" Nov 24 10:18:57 crc kubenswrapper[4719]: I1124 10:18:57.315963 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-bxtbn_a11d83d8-730f-4b57-bc95-e0506f69539d/nmstate-webhook/0.log" Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.562097 4719 patch_prober.go:28] interesting pod/machine-config-daemon-hnkb6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.562758 4719 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.562818 4719 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.563681 4719 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e"} pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.563759 4719 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerName="machine-config-daemon" containerID="cri-o://65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" gracePeriod=600 Nov 24 10:19:04 crc kubenswrapper[4719]: E1124 10:19:04.691746 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.801089 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe015f89-bb6b-4fa1-b687-192013956ed6" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" exitCode=0 Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.801148 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerDied","Data":"65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e"} Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.801181 4719 scope.go:117] "RemoveContainer" containerID="7c367fa3d99ee232632dd218f86db975241bce842b32aed9d95c60ebe991c37c" Nov 24 10:19:04 crc kubenswrapper[4719]: I1124 10:19:04.801768 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:19:04 crc kubenswrapper[4719]: E1124 10:19:04.801999 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:19:12 crc kubenswrapper[4719]: I1124 10:19:12.988108 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-2d8hg_89bc3754-b51b-44ed-9c94-5d7f074446e2/controller/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.005075 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-2d8hg_89bc3754-b51b-44ed-9c94-5d7f074446e2/kube-rbac-proxy/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.150180 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-frr-files/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.328289 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-frr-files/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.359538 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-reloader/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.362692 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-metrics/0.log" Nov 24 10:19:13 crc 
kubenswrapper[4719]: I1124 10:19:13.425679 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-reloader/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.560951 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-reloader/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.646712 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-metrics/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.650413 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-metrics/0.log" Nov 24 10:19:13 crc kubenswrapper[4719]: I1124 10:19:13.650498 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-frr-files/0.log" Nov 24 10:19:14 crc kubenswrapper[4719]: I1124 10:19:14.361113 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-frr-files/0.log" Nov 24 10:19:14 crc kubenswrapper[4719]: I1124 10:19:14.364649 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-reloader/0.log" Nov 24 10:19:14 crc kubenswrapper[4719]: I1124 10:19:14.420548 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/cp-metrics/0.log" Nov 24 10:19:14 crc kubenswrapper[4719]: I1124 10:19:14.484529 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/controller/0.log" Nov 24 10:19:14 crc kubenswrapper[4719]: I1124 10:19:14.571792 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/frr-metrics/0.log" Nov 24 10:19:14 crc kubenswrapper[4719]: I1124 10:19:14.623723 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/kube-rbac-proxy/0.log" Nov 24 10:19:14 crc kubenswrapper[4719]: I1124 10:19:14.777264 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/kube-rbac-proxy-frr/0.log" Nov 24 10:19:14 crc kubenswrapper[4719]: I1124 10:19:14.814939 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/reloader/0.log" Nov 24 10:19:15 crc kubenswrapper[4719]: I1124 10:19:15.125880 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-s55w7_c3fe3e56-b4b2-48c9-9b95-5aa984326faa/frr-k8s-webhook-server/0.log" Nov 24 10:19:15 crc kubenswrapper[4719]: I1124 10:19:15.375843 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c6ccddcb9-hhfps_053b9219-602e-4d52-af3d-a6e039be213e/manager/0.log" Nov 24 10:19:15 crc kubenswrapper[4719]: I1124 10:19:15.868094 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-596c48c889-kksvs_fc753907-15ea-4768-8c53-e78830249c42/webhook-server/0.log" Nov 24 10:19:16 crc kubenswrapper[4719]: I1124 10:19:16.112136 4719 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lqkr_ce9d612a-d5e7-4ab8-809e-97155ecda8ef/kube-rbac-proxy/0.log" Nov 24 10:19:16 crc kubenswrapper[4719]: I1124 10:19:16.300061 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t9glv_5dddbe20-c847-452a-ae82-5c12dc74d379/frr/0.log" Nov 24 10:19:16 crc kubenswrapper[4719]: I1124 10:19:16.506945 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lqkr_ce9d612a-d5e7-4ab8-809e-97155ecda8ef/speaker/0.log" Nov 24 10:19:16 crc kubenswrapper[4719]: I1124 10:19:16.522867 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:19:16 crc kubenswrapper[4719]: E1124 10:19:16.523126 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:19:28 crc kubenswrapper[4719]: I1124 10:19:28.294097 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/util/0.log" Nov 24 10:19:28 crc kubenswrapper[4719]: I1124 10:19:28.510324 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/pull/0.log" Nov 24 10:19:28 crc kubenswrapper[4719]: I1124 10:19:28.512003 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/util/0.log" Nov 24 10:19:28 crc kubenswrapper[4719]: I1124 10:19:28.539205 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/pull/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.071340 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/pull/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.113995 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/extract/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.140492 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772evzhq9_8267c94c-41ea-4889-bd9f-398571d09747/util/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.285418 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-utilities/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.487285 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-utilities/0.log" Nov 24 
10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.493996 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-content/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.520859 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:19:29 crc kubenswrapper[4719]: E1124 10:19:29.521168 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.523175 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-content/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.673268 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-utilities/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.693844 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/extract-content/0.log" Nov 24 10:19:29 crc kubenswrapper[4719]: I1124 10:19:29.901308 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-utilities/0.log" Nov 24 10:19:30 crc kubenswrapper[4719]: I1124 10:19:30.335705 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-k8b6n_d4fa8925-5590-43e3-b4a1-4c1bda621334/registry-server/0.log" Nov 24 10:19:30 crc kubenswrapper[4719]: I1124 10:19:30.775553 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-content/0.log" Nov 24 10:19:30 crc kubenswrapper[4719]: I1124 10:19:30.812802 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-utilities/0.log" Nov 24 10:19:30 crc kubenswrapper[4719]: I1124 10:19:30.831387 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-content/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.131837 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-utilities/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.133970 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/extract-content/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.392104 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/util/0.log" Nov 24 10:19:31 crc 
kubenswrapper[4719]: I1124 10:19:31.666972 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/util/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.718881 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhxvl_55e4ac5d-677d-41b4-b3c8-adaac9928f7d/registry-server/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.730746 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/pull/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.751167 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/pull/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.895695 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/util/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.931902 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/extract/0.log" Nov 24 10:19:31 crc kubenswrapper[4719]: I1124 10:19:31.935135 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6qvgvv_441dcc7a-e87d-4f62-a1e8-79ec5e961ce3/pull/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.069010 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mlglm_304abde6-d85e-4425-93f5-af2b501ab1c9/marketplace-operator/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.096904 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-utilities/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.292828 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-utilities/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.329006 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-content/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.354660 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-content/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.465867 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-utilities/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.487473 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/extract-content/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.628834 4719 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-utilities/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.701455 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-54tzs_56525057-4157-4fce-9288-ddae977d1037/registry-server/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.814954 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-content/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.815082 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-utilities/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.816343 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-content/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.989499 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-content/0.log" Nov 24 10:19:32 crc kubenswrapper[4719]: I1124 10:19:32.995337 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/extract-utilities/0.log" Nov 24 10:19:33 crc kubenswrapper[4719]: I1124 10:19:33.432467 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-h69nz_2d0dbb1b-45d0-4aa1-b76e-723a630b9105/registry-server/0.log" Nov 24 10:19:43 crc kubenswrapper[4719]: I1124 10:19:43.520867 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:19:43 crc kubenswrapper[4719]: E1124 10:19:43.521687 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:19:58 crc kubenswrapper[4719]: I1124 10:19:58.523313 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:19:58 crc kubenswrapper[4719]: E1124 10:19:58.524071 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:20:08 crc kubenswrapper[4719]: E1124 10:20:08.797476 4719 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.26:53726->38.129.56.26:46637: read tcp 38.129.56.26:53726->38.129.56.26:46637: read: connection reset by peer Nov 24 10:20:10 crc kubenswrapper[4719]: I1124 10:20:10.521501 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 
24 10:20:10 crc kubenswrapper[4719]: E1124 10:20:10.521978 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:20:24 crc kubenswrapper[4719]: I1124 10:20:24.529401 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:20:24 crc kubenswrapper[4719]: E1124 10:20:24.530349 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:20:38 crc kubenswrapper[4719]: I1124 10:20:38.521504 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:20:38 crc kubenswrapper[4719]: E1124 10:20:38.522484 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:20:50 crc kubenswrapper[4719]: I1124 10:20:50.520913 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:20:50 crc kubenswrapper[4719]: E1124 10:20:50.521563 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:21:02 crc kubenswrapper[4719]: I1124 10:21:02.521069 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:21:02 crc kubenswrapper[4719]: E1124 10:21:02.521737 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:21:14 crc kubenswrapper[4719]: I1124 10:21:14.532421 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:21:14 crc kubenswrapper[4719]: E1124 10:21:14.533864 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:21:28 crc kubenswrapper[4719]: I1124 10:21:28.522009 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:21:28 crc kubenswrapper[4719]: E1124 10:21:28.523299 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:21:41 crc kubenswrapper[4719]: I1124 10:21:41.522156 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:21:41 crc kubenswrapper[4719]: E1124 10:21:41.523016 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:21:55 crc kubenswrapper[4719]: I1124 10:21:55.297961 4719 generic.go:334] "Generic (PLEG): container finished" podID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerID="a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3" exitCode=0 Nov 24 10:21:55 crc kubenswrapper[4719]: I1124 10:21:55.298574 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vkqb2/must-gather-g872x" event={"ID":"9572f9fd-5e52-4924-87c5-b85c9c81fc2e","Type":"ContainerDied","Data":"a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3"} Nov 24 10:21:55 crc kubenswrapper[4719]: I1124 10:21:55.299193 4719 scope.go:117] "RemoveContainer" containerID="a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3" Nov 24 10:21:55 crc kubenswrapper[4719]: I1124 10:21:55.791907 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vkqb2_must-gather-g872x_9572f9fd-5e52-4924-87c5-b85c9c81fc2e/gather/0.log" Nov 24 10:21:56 crc kubenswrapper[4719]: I1124 10:21:56.521240 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:21:56 crc kubenswrapper[4719]: E1124 10:21:56.522324 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:22:09 crc kubenswrapper[4719]: I1124 10:22:09.363314 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vkqb2/must-gather-g872x"] Nov 24 10:22:09 crc kubenswrapper[4719]: I1124 10:22:09.364113 4719 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vkqb2/must-gather-g872x" podUID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerName="copy" containerID="cri-o://db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7" gracePeriod=2 Nov 24 10:22:09 crc kubenswrapper[4719]: I1124 10:22:09.371815 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vkqb2/must-gather-g872x"] Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.091408 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vkqb2_must-gather-g872x_9572f9fd-5e52-4924-87c5-b85c9c81fc2e/copy/0.log" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.092491 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.202230 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvrbk\" (UniqueName: \"kubernetes.io/projected/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-kube-api-access-nvrbk\") pod \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\" (UID: \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\") " Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.202317 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-must-gather-output\") pod \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\" (UID: \"9572f9fd-5e52-4924-87c5-b85c9c81fc2e\") " Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.210178 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-kube-api-access-nvrbk" (OuterVolumeSpecName: "kube-api-access-nvrbk") pod "9572f9fd-5e52-4924-87c5-b85c9c81fc2e" (UID: "9572f9fd-5e52-4924-87c5-b85c9c81fc2e"). InnerVolumeSpecName "kube-api-access-nvrbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.304874 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvrbk\" (UniqueName: \"kubernetes.io/projected/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-kube-api-access-nvrbk\") on node \"crc\" DevicePath \"\"" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.386398 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9572f9fd-5e52-4924-87c5-b85c9c81fc2e" (UID: "9572f9fd-5e52-4924-87c5-b85c9c81fc2e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.405765 4719 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9572f9fd-5e52-4924-87c5-b85c9c81fc2e-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.458399 4719 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vkqb2_must-gather-g872x_9572f9fd-5e52-4924-87c5-b85c9c81fc2e/copy/0.log" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.458995 4719 generic.go:334] "Generic (PLEG): container finished" podID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerID="db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7" exitCode=143 Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.459067 4719 scope.go:117] "RemoveContainer" containerID="db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.459140 4719 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vkqb2/must-gather-g872x" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.479473 4719 scope.go:117] "RemoveContainer" containerID="a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.526398 4719 scope.go:117] "RemoveContainer" containerID="db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7" Nov 24 10:22:10 crc kubenswrapper[4719]: E1124 10:22:10.533929 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7\": container with ID starting with db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7 not found: ID does not exist" containerID="db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.534099 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7"} err="failed to get container status \"db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7\": rpc error: code = NotFound desc = could not find container \"db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7\": container with ID starting with db56b73299c07fb0e5fdee2df14af6cf4ec1664a055d4abc13ed6a36a1cbc8b7 not found: ID does not exist" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.534232 4719 scope.go:117] "RemoveContainer" containerID="a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3" Nov 24 10:22:10 crc kubenswrapper[4719]: E1124 10:22:10.534837 4719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3\": container with ID starting with a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3 not found: ID does not exist" containerID="a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.534935 4719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3"} err="failed to get container status 
\"a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3\": rpc error: code = NotFound desc = could not find container \"a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3\": container with ID starting with a2dc5aaf826f75d1c69f76d9eeadd84f9685a05cab8eeb513c2a6a72a0237af3 not found: ID does not exist" Nov 24 10:22:10 crc kubenswrapper[4719]: I1124 10:22:10.539908 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" path="/var/lib/kubelet/pods/9572f9fd-5e52-4924-87c5-b85c9c81fc2e/volumes" Nov 24 10:22:11 crc kubenswrapper[4719]: I1124 10:22:11.521660 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:22:11 crc kubenswrapper[4719]: E1124 10:22:11.522314 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:22:26 crc kubenswrapper[4719]: I1124 10:22:26.521559 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:22:26 crc kubenswrapper[4719]: E1124 10:22:26.522461 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:22:37 crc kubenswrapper[4719]: I1124 10:22:37.520588 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:22:37 crc kubenswrapper[4719]: E1124 10:22:37.521244 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:22:52 crc kubenswrapper[4719]: I1124 10:22:52.521211 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:22:52 crc kubenswrapper[4719]: E1124 10:22:52.522994 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:23:05 crc kubenswrapper[4719]: I1124 10:23:05.520704 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:23:05 crc kubenswrapper[4719]: E1124 10:23:05.521409 4719 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:23:20 crc kubenswrapper[4719]: I1124 10:23:20.521567 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:23:20 crc kubenswrapper[4719]: E1124 10:23:20.522636 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:23:32 crc kubenswrapper[4719]: I1124 10:23:32.521266 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:23:32 crc kubenswrapper[4719]: E1124 10:23:32.524032 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:23:44 crc kubenswrapper[4719]: I1124 10:23:44.520506 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:23:44 crc kubenswrapper[4719]: E1124 10:23:44.521236 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.014488 4719 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5p69z"] Nov 24 10:23:51 crc kubenswrapper[4719]: E1124 10:23:51.015722 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerName="copy" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.015740 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerName="copy" Nov 24 10:23:51 crc kubenswrapper[4719]: E1124 10:23:51.015766 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerName="gather" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.015774 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerName="gather" Nov 24 10:23:51 crc kubenswrapper[4719]: E1124 10:23:51.015795 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerName="extract-utilities" Nov 24 10:23:51 crc 
kubenswrapper[4719]: I1124 10:23:51.015803 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerName="extract-utilities" Nov 24 10:23:51 crc kubenswrapper[4719]: E1124 10:23:51.015832 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerName="registry-server" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.015843 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerName="registry-server" Nov 24 10:23:51 crc kubenswrapper[4719]: E1124 10:23:51.015866 4719 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerName="extract-content" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.015875 4719 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerName="extract-content" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.016170 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerName="copy" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.016187 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="9572f9fd-5e52-4924-87c5-b85c9c81fc2e" containerName="gather" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.016197 4719 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92bddb1-5b6f-4eea-bdad-8675ee0dc6e8" containerName="registry-server" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.020600 4719 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.073906 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p69z"] Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.079097 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbcdt\" (UniqueName: \"kubernetes.io/projected/fe6d5919-9a77-4dab-aefb-5f733b603ca3-kube-api-access-lbcdt\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.079486 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-catalog-content\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.079580 4719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-utilities\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.181512 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-catalog-content\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " 
pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.181582 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-utilities\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.181627 4719 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbcdt\" (UniqueName: \"kubernetes.io/projected/fe6d5919-9a77-4dab-aefb-5f733b603ca3-kube-api-access-lbcdt\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.182523 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-catalog-content\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.182588 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-utilities\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.209836 4719 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbcdt\" (UniqueName: \"kubernetes.io/projected/fe6d5919-9a77-4dab-aefb-5f733b603ca3-kube-api-access-lbcdt\") pod \"certified-operators-5p69z\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.342411 4719 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:23:51 crc kubenswrapper[4719]: I1124 10:23:51.830226 4719 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5p69z"] Nov 24 10:23:52 crc kubenswrapper[4719]: I1124 10:23:52.433764 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe6d5919-9a77-4dab-aefb-5f733b603ca3" containerID="b41b4da3976bcad4136dc63a54348e8a77e8507d0c6deba716d70d92422f0b33" exitCode=0 Nov 24 10:23:52 crc kubenswrapper[4719]: I1124 10:23:52.434078 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p69z" event={"ID":"fe6d5919-9a77-4dab-aefb-5f733b603ca3","Type":"ContainerDied","Data":"b41b4da3976bcad4136dc63a54348e8a77e8507d0c6deba716d70d92422f0b33"} Nov 24 10:23:52 crc kubenswrapper[4719]: I1124 10:23:52.434116 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p69z" event={"ID":"fe6d5919-9a77-4dab-aefb-5f733b603ca3","Type":"ContainerStarted","Data":"c616ae6a8f74dd79189690693750a0dae4fb636c63b00232bff4f2953d78a554"} Nov 24 10:23:52 crc kubenswrapper[4719]: I1124 10:23:52.436807 4719 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 10:23:53 crc kubenswrapper[4719]: I1124 10:23:53.443197 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p69z" event={"ID":"fe6d5919-9a77-4dab-aefb-5f733b603ca3","Type":"ContainerStarted","Data":"97fb63ed990f0b39b459680e86a35b3797253b0d981cd5f1216e568830f06bb2"} Nov 24 10:23:54 crc kubenswrapper[4719]: I1124 10:23:54.452459 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe6d5919-9a77-4dab-aefb-5f733b603ca3" containerID="97fb63ed990f0b39b459680e86a35b3797253b0d981cd5f1216e568830f06bb2" exitCode=0 Nov 24 10:23:54 crc kubenswrapper[4719]: I1124 10:23:54.452543 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p69z" event={"ID":"fe6d5919-9a77-4dab-aefb-5f733b603ca3","Type":"ContainerDied","Data":"97fb63ed990f0b39b459680e86a35b3797253b0d981cd5f1216e568830f06bb2"} Nov 24 10:23:55 crc kubenswrapper[4719]: I1124 10:23:55.467100 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p69z" event={"ID":"fe6d5919-9a77-4dab-aefb-5f733b603ca3","Type":"ContainerStarted","Data":"091eb9eeb9530cdd4f83bc6ecdfc0b98ecf63fdc31b11e43dbc6015ae40bfe23"} Nov 24 10:23:55 crc kubenswrapper[4719]: I1124 10:23:55.495213 4719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5p69z" podStartSLOduration=3.080193597 podStartE2EDuration="5.495187536s" podCreationTimestamp="2025-11-24 10:23:50 +0000 UTC" firstStartedPulling="2025-11-24 10:23:52.436518209 +0000 UTC m=+5408.767791461" lastFinishedPulling="2025-11-24 10:23:54.851512158 +0000 UTC m=+5411.182785400" observedRunningTime="2025-11-24 10:23:55.490715449 +0000 UTC m=+5411.821988741" watchObservedRunningTime="2025-11-24 10:23:55.495187536 +0000 UTC m=+5411.826460818" Nov 24 10:23:56 crc kubenswrapper[4719]: I1124 10:23:56.520522 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:23:56 crc kubenswrapper[4719]: E1124 10:23:56.520963 4719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hnkb6_openshift-machine-config-operator(fe015f89-bb6b-4fa1-b687-192013956ed6)\"" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" podUID="fe015f89-bb6b-4fa1-b687-192013956ed6" Nov 24 10:24:01 crc kubenswrapper[4719]: I1124 10:24:01.342958 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:24:01 crc kubenswrapper[4719]: I1124 10:24:01.345602 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:24:01 crc kubenswrapper[4719]: I1124 10:24:01.413025 4719 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:24:01 crc kubenswrapper[4719]: I1124 10:24:01.686773 4719 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:24:01 crc kubenswrapper[4719]: I1124 10:24:01.740997 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p69z"] Nov 24 10:24:03 crc kubenswrapper[4719]: I1124 10:24:03.536735 4719 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5p69z" podUID="fe6d5919-9a77-4dab-aefb-5f733b603ca3" containerName="registry-server" containerID="cri-o://091eb9eeb9530cdd4f83bc6ecdfc0b98ecf63fdc31b11e43dbc6015ae40bfe23" gracePeriod=2 Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.550106 4719 generic.go:334] "Generic (PLEG): container finished" podID="fe6d5919-9a77-4dab-aefb-5f733b603ca3" containerID="091eb9eeb9530cdd4f83bc6ecdfc0b98ecf63fdc31b11e43dbc6015ae40bfe23" exitCode=0 Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.550227 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p69z" event={"ID":"fe6d5919-9a77-4dab-aefb-5f733b603ca3","Type":"ContainerDied","Data":"091eb9eeb9530cdd4f83bc6ecdfc0b98ecf63fdc31b11e43dbc6015ae40bfe23"} Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.550409 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5p69z" event={"ID":"fe6d5919-9a77-4dab-aefb-5f733b603ca3","Type":"ContainerDied","Data":"c616ae6a8f74dd79189690693750a0dae4fb636c63b00232bff4f2953d78a554"} Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.550419 4719 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c616ae6a8f74dd79189690693750a0dae4fb636c63b00232bff4f2953d78a554" Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.551160 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.615027 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbcdt\" (UniqueName: \"kubernetes.io/projected/fe6d5919-9a77-4dab-aefb-5f733b603ca3-kube-api-access-lbcdt\") pod \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.615310 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-catalog-content\") pod \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.615342 4719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-utilities\") pod \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\" (UID: \"fe6d5919-9a77-4dab-aefb-5f733b603ca3\") " Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.616191 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-utilities" (OuterVolumeSpecName: "utilities") pod "fe6d5919-9a77-4dab-aefb-5f733b603ca3" (UID: "fe6d5919-9a77-4dab-aefb-5f733b603ca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.625317 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe6d5919-9a77-4dab-aefb-5f733b603ca3-kube-api-access-lbcdt" (OuterVolumeSpecName: "kube-api-access-lbcdt") pod "fe6d5919-9a77-4dab-aefb-5f733b603ca3" (UID: "fe6d5919-9a77-4dab-aefb-5f733b603ca3"). InnerVolumeSpecName "kube-api-access-lbcdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.666869 4719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe6d5919-9a77-4dab-aefb-5f733b603ca3" (UID: "fe6d5919-9a77-4dab-aefb-5f733b603ca3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.718369 4719 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.718402 4719 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe6d5919-9a77-4dab-aefb-5f733b603ca3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 10:24:04 crc kubenswrapper[4719]: I1124 10:24:04.718411 4719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbcdt\" (UniqueName: \"kubernetes.io/projected/fe6d5919-9a77-4dab-aefb-5f733b603ca3-kube-api-access-lbcdt\") on node \"crc\" DevicePath \"\"" Nov 24 10:24:05 crc kubenswrapper[4719]: I1124 10:24:05.559523 4719 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5p69z" Nov 24 10:24:05 crc kubenswrapper[4719]: I1124 10:24:05.600667 4719 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5p69z"] Nov 24 10:24:05 crc kubenswrapper[4719]: I1124 10:24:05.608612 4719 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5p69z"] Nov 24 10:24:06 crc kubenswrapper[4719]: I1124 10:24:06.534410 4719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe6d5919-9a77-4dab-aefb-5f733b603ca3" path="/var/lib/kubelet/pods/fe6d5919-9a77-4dab-aefb-5f733b603ca3/volumes" Nov 24 10:24:09 crc kubenswrapper[4719]: I1124 10:24:09.520640 4719 scope.go:117] "RemoveContainer" containerID="65f3afdcc661df4616abc5c91b442f22f62e7265225b71f279fb77fe79d4182e" Nov 24 10:24:10 crc kubenswrapper[4719]: I1124 10:24:10.615214 4719 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hnkb6" event={"ID":"fe015f89-bb6b-4fa1-b687-192013956ed6","Type":"ContainerStarted","Data":"42e072dfafc8b206b36db8e3bbac661681de02bc5d6fb8f0ccf07109f7043090"}